Prior knowledge

File descriptor

A file descriptor (fd for short) is formally a non-negative integer. In practice it is an index into the table of open files that the kernel maintains for each process. When a program opens an existing file or creates a new file, the kernel returns a file descriptor to the process. Low-level programming often revolves around file descriptors. However, the concept of a file descriptor applies only to operating systems such as UNIX and Linux.
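For instance, a minimal sketch on a POSIX system (the path /tmp/demo.txt is only a placeholder for illustration): open() hands back the descriptor, read() uses it, and close() releases it.

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void) {
    // open() returns a small non-negative integer: the file descriptor.
    int fd = open("/tmp/demo.txt", O_RDONLY);
    if (fd == -1) {
        perror("open");
        return 1;
    }
    printf("kernel returned fd = %d\n", fd);   // typically 3, since 0/1/2 are stdin/stdout/stderr

    char buf[128];
    ssize_t n = read(fd, buf, sizeof(buf));    // read through the descriptor
    if (n >= 0)
        printf("read %zd bytes\n", n);

    close(fd);                                 // release the descriptor when done
    return 0;
}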

On the Linux platform, everything is a file

In Linux, the kernel treats all external devices as files. Reading from or writing to a file goes through system calls provided by the kernel and operates on an fd. Reading from or writing to a socket likewise has a corresponding descriptor, called a socketfd (socket descriptor). A descriptor is really just a number that points to a structure in the kernel (file path, data area, and so on), as shown in the figure below.

Three tables are created to maintain file descriptors

  • The file descriptor table at the process level
  • System-level file descriptor table
  • I-node table of the file system

In practice we sometimes run into the “Too many open files” problem, which is most likely because the process has too few file descriptors available. Often, however, it is not that the limit is too low, but that a bug opens a large number of file connections (network connections also consume file descriptors) and never releases them. The fundamental way to solve “Too many open files” is to release resources promptly once the application is done with them.
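As an aside, the per-process descriptor limit can be inspected programmatically on a POSIX system; the sketch below (an illustration, not from the original article) reads RLIMIT_NOFILE with getrlimit(). Exhausting this limit is what produces EMFILE, the errno behind “Too many open files”.

#include <stdio.h>
#include <sys/resource.h>

int main(void) {
    struct rlimit rl;
    // RLIMIT_NOFILE is the per-process cap on open file descriptors.
    if (getrlimit(RLIMIT_NOFILE, &rl) == 0) {
        printf("soft limit: %llu, hard limit: %llu\n",
               (unsigned long long)rl.rlim_cur,
               (unsigned long long)rl.rlim_max);
    }
    return 0;
}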

Userspace and kernel space, kernel-state and user-state

User space vs. kernel space, process context vs. interrupt context

Today's operating systems use virtual memory, so for a 32-bit operating system the addressable (virtual) space is 4 GB (2^32). The core of the operating system is the kernel, which is independent of ordinary applications and can access the protected memory space as well as all of the underlying hardware devices. To ensure that user processes cannot operate on the kernel directly, and to keep the kernel safe, the operating system divides the virtual address space into two parts: kernel space and user space. For the Linux operating system (taking a 32-bit system as an example):

  • The highest 1 GB (from virtual address 0xC0000000 to 0xFFFFFFFF) is used by the kernel and is called kernel space;
  • The lower 3 GB (from virtual address 0x00000000 to 0xBFFFFFFF) are used by individual processes and are called user space.

Each process can enter the kernel through a system call, so the Linux kernel is shared by all processes within the system. Thus, from a process-specific perspective, each process can have 4 gigabytes of virtual space.

  • When a task (process) executes a system call and is trapped in kernel code, the process is said to be in the kernel running state (kernel mode). The processor executes kernel code at the highest privilege level (ring 0). When a process is in kernel mode, the kernel code being executed uses the kernel stack of the current process; each process has its own kernel stack.
  • When a process is executing its own user code, it is said to be in user mode. The processor then runs the user code at the lowest privilege level (ring 3). When the execution of a user program is interrupted by an interrupt handler, the process can also loosely be said to be in kernel mode, because the interrupt handler uses the kernel stack of the current process.

IO model

According to the classification of IO models in UNIX Network Programming, UNIX provides the following five IO models.

Blocking IO

The most common IO model is blocking IO. Taking a UDP datagram socket as an example, the following is the call flow of blocking IO:

The figure above contains a recvfrom call. What is it? recvfrom is a C function that ultimately invokes a Linux system call, so you can see that no matter which language we write the application in, the call eventually ends up executing an operating system kernel function. Roughly speaking, recvfrom receives data from a (connected) socket and captures the address of the source that sent the data. If there is no message to read on the socket, the call waits for one to arrive, unless the socket has been set to non-blocking mode.

As shown in the figure above, recvfrom blocks the process; it is a blocking function. The system call does not return until a packet has arrived and been copied into the application process's buffer, or an error occurs. During this time the process is blocked the entire way, from the moment recvfrom is called until it returns, which is why this is called the blocking IO model. As mentioned above, blocking I/O stays blocked if the request cannot be completed immediately. Blocking I/O is divided into two phases.

  • Phase 1: Wait for data to be ready. Network I/O waits for remote data to arrive. The case of disk I/O is waiting for disk data to be read from disk into kernel-mode memory.
  • Phase 2: Data copy. For system security reasons, user-mode programs do not have permission to read kernel-mode memory directly, so the kernel is responsible for copying data from kernel-mode memory into user-mode memory.

With traditional blocking I/O, an operation on a file descriptor (fd) waits until the kernel responds if it cannot complete immediately. The disadvantage is that a single thread can only operate on one fd at a time.
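To make the two phases concrete, here is a minimal sketch of a blocking UDP receiver (port 9000 is an arbitrary example and error handling is omitted):

#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void) {
    int fd = socket(AF_INET, SOCK_DGRAM, 0);          // UDP datagram socket
    struct sockaddr_in addr;
    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(9000);                      // example port
    bind(fd, (struct sockaddr *)&addr, sizeof(addr));

    char buf[1024];
    struct sockaddr_in peer;
    socklen_t peer_len = sizeof(peer);
    // Phases 1 and 2: the call does not return until a datagram has arrived
    // and been copied into buf; the process is blocked the whole time.
    ssize_t n = recvfrom(fd, buf, sizeof(buf), 0,
                         (struct sockaddr *)&peer, &peer_len);
    printf("received %zd bytes\n", n);
    close(fd);
    return 0;
}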

Non-blocking IO model

Non-blocking IO model, as shown in the figure below:

A non-blocking I/O request consists of the following three phases

  • Phase 1: Setting the socket to NONBLOCK tells the kernel not to put the thread to sleep when the requested I/O cannot be completed, but to return an error code (EWOULDBLOCK) instead, so that the request does not block.
  • Phase 2: The I/O functions keep testing whether the data is ready, and if not, continue testing until it is. Throughout the I/O request, although the user thread can return immediately after each call, it still has to keep polling and repeating the request while waiting for data, which consumes a large amount of CPU.
  • Phase 3: Data is ready to be copied from the kernel to user space.

In summary, when recvfrom enters the kernel, if there is no data in the buffer, an EWOULDBLOCK error is returned. Normally the non-blocking IO model polls this state to see whether any data has arrived in the kernel: after the non-blocking recvfrom system call is issued, the process is not blocked and the kernel returns to the process immediately; if the data is not ready, an error is returned. The process can then do something else before issuing the recvfrom system call again. Repeating this over and over is what is usually referred to as polling. Polling checks the kernel for data until it is ready, and the data is then copied to the process for processing. Note that the process is still blocked during the entire copy of the data.

On Linux, you can set the socket to make it non-blocking. Non-blocking IO consumes too much CPU time and spends most of it polling.
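A minimal sketch of this polling pattern, assuming fd is an already-created UDP socket: the descriptor is switched to O_NONBLOCK with fcntl(), and recvfrom() is simply retried whenever it reports EWOULDBLOCK/EAGAIN.

#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <sys/socket.h>
#include <unistd.h>

void poll_nonblocking(int fd) {
    // Phase 1: mark the socket O_NONBLOCK so the kernel never puts us to sleep.
    int flags = fcntl(fd, F_GETFL, 0);
    fcntl(fd, F_SETFL, flags | O_NONBLOCK);

    char buf[1024];
    for (;;) {
        ssize_t n = recvfrom(fd, buf, sizeof(buf), 0, NULL, NULL);
        if (n >= 0) {
            // Phase 3: data was ready and has now been copied into buf.
            printf("got %zd bytes\n", n);
            break;
        }
        if (errno == EWOULDBLOCK || errno == EAGAIN) {
            // Phase 2: no data yet; do something else, then ask again (polling).
            usleep(1000);
            continue;
        }
        perror("recvfrom");
        break;
    }
}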

IO multiplexing

Multiplexing is really a concept rather than a specific technology. Before I/O multiplexing there were frequency-division multiplexing and time-division multiplexing of communication lines: by sensibly arranging when and where each user may use a resource, it appears as if everyone is simultaneously using a resource that can in fact only serve a few users at a time.

In “I/O multiplexing”, the “multi” refers to multiple network connections, and the “plexing” (reuse) refers to reusing the same thread.

The word “multiplexing” in I/O multiplexing is mostly used in the field of communications. To make full use of communication lines, one wants to transmit multiple signals over a single channel, which requires combining the multiple signals into one. The device that combines multiple signals into one is called a multiplexer, and the receiver obviously needs to split the combined signal back into the original signals; that device is called a demultiplexer, as shown in the figure:

IO multiplexing model, as shown in the figure below:

There is a select function in the figure above. Let’s explain the function first:

Basic principle: the file descriptors monitored by the select function are divided into three classes, namely writefds, readfds, and exceptfds. The select call blocks until a descriptor is ready (readable, writable, or has an exception) or the timeout expires (timeout specifies how long to wait; NULL means wait indefinitely, while a zero timeout makes the call return immediately), and then the function returns. When select returns, the ready descriptors can be found by iterating over the fd_set.

// Return value: number of ready file descriptors; 0 on timeout, -1 on error.
#include <sys/select.h>
#include <sys/time.h>

#define FD_SETSIZE 1024
#define NFDBITS (8 * sizeof(unsigned long))
#define __FDSET_LONGS (FD_SETSIZE/NFDBITS)

// Data structure (bitmap)
typedef struct {
    unsigned long fds_bits[__FDSET_LONGS];
} fd_set;

// API
int select(
    int max_fd, 
    fd_set *readset, 
    fd_set *writeset, 
    fd_set *exceptset, 
    struct timeval *timeout
);                             // Returns the number of ready descriptors

FD_ZERO(fd_set* fds)           // Clear the set
FD_SET(int fd, fd_set* fds)    // Add the given descriptor to the set
FD_ISSET(int fd, fd_set* fds)  // Check whether the given descriptor is in the set
FD_CLR(int fd, fd_set* fds)    // Remove the given descriptor from the set


Select takes three groups of parameters:

  1. The maximum fd (file descriptor) value + 1
  2. The fds passed in, which fall into three categories:
    • fds monitored for reading
    • fds monitored for writing
    • fds monitored for exceptions
  3. How long to wait (timeout) if no fd meets the condition

select uses a bitmap of FD_SETSIZE bits to represent its input parameters; FD_SETSIZE defaults to 1024. Since there is no native 1024-bit integer type, the bitmap is represented by an array, and because the array elements occupy consecutive addresses it effectively behaves like a single 1024-bit number. For example, if bit 1 is set to 1, the input contains fd 1. This also limits select to fd values below 1024: an fd value cannot be greater than or equal to 1024.

A set of file descriptors is stored in an fd_set, where each bit of an fd_set variable represents one descriptor. You can simply think of it as an array of many bits.

In Linux, we can multiplex I/O using the select function. The parameters passed to select tell the kernel:

• The file descriptor we are interested in

• For each descriptor, which conditions we care about. (Do we want to read from it, write to it, or watch it for exceptions?)

• How long do we have to wait? (We can wait an infinite amount of time, a fixed amount of time, or not wait at all.)

After returning from select, the kernel tells us the following:

• Number of descriptors that are ready for our requirements

• Which descriptors are ready for the three conditions. (Reading, writing, exception)

When a read or write event occurs, the select function tells us how many events are ready, but not which ones; we have to iterate over the event set ourselves to find out. With the information returned, we can then call the appropriate I/O function (usually read or write), and those calls will no longer block.

Select has an O(n) undifferentiated polling complexity, and the more streams processed at the same time, the longer the undifferentiated polling time.

The basic process is shown below

Call order: sys_select() → core_sys_select() → do_select() → fop->poll()

If you are not interested in the theory above, the key point is that the main advantage of select is that the user can handle multiple socket I/O requests in a single thread: the user registers multiple sockets and then repeatedly calls select to read from whichever sockets have become active, thereby handling multiple I/O requests in the same thread. Under the synchronous blocking model, multithreading would be necessary to achieve this.

Let’s look at the select process pseudocode:
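The original pseudocode figure is not reproduced here; the following is a rough sketch of the same flow, assuming listen_fd is an already bound and listening TCP socket, with error and bounds handling omitted:

#include <sys/select.h>
#include <sys/socket.h>
#include <unistd.h>

// Rough sketch of the select loop; listen_fd is an already bound and
// listening TCP socket. Error and bounds handling are omitted for brevity.
void select_loop(int listen_fd) {
    fd_set read_set;
    int fds[FD_SETSIZE];                    // the sockets we are watching
    int nfds = 0, max_fd = listen_fd;
    fds[nfds++] = listen_fd;

    for (;;) {
        FD_ZERO(&read_set);
        for (int i = 0; i < nfds; i++)      // re-register every fd before each call
            FD_SET(fds[i], &read_set);

        // Blocks until at least one fd is readable (NULL timeout: wait indefinitely).
        select(max_fd + 1, &read_set, NULL, NULL, NULL);

        for (int i = 0; i < nfds; i++) {    // the user must scan the whole set
            if (!FD_ISSET(fds[i], &read_set))
                continue;
            if (fds[i] == listen_fd) {      // a new connection arrived
                int conn = accept(listen_fd, NULL, NULL);
                fds[nfds++] = conn;
                if (conn > max_fd) max_fd = conn;
            } else {                        // data is ready: read() will not block now
                char buf[1024];
                read(fds[i], buf, sizeof(buf));
            }
        }
    }
}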

select returns the sockets that are in a ready state. You could argue that using select for I/O requests is not much different from the synchronous blocking model, and is even less efficient because of the extra steps of registering the monitored sockets and calling select. However, the biggest advantage of select is that the user can handle I/O requests from multiple sockets simultaneously in a single thread. If your network traffic is heavy, this mode works better than blocking mode.

IO multiplexing (select, poll, epoll) is also called event-driven IO in some places.

The advantage of select/epoll is that a single process can handle the IO of multiple network connections at the same time. The biggest advantage of I/O multiplexing is its low overhead: the system does not need to create or maintain extra processes/threads, which greatly reduces system overhead. The select, poll, and epoll functions poll all of the sockets they are responsible for and notify the user process when data arrives on one of them.

When a user process calls select, the whole process is blocked while the kernel “monitors” all of the sockets that select is responsible for; select returns as soon as data is ready on any of them. The user process then calls a read operation to copy the data from the kernel into the user process. So I/O multiplexing is characterized by a mechanism in which one process can wait on multiple file descriptors at the same time, and select() returns as soon as any one of these file descriptors (socket descriptors) becomes readable.

The figure above isn't really that different from the blocking IO one; in fact it is slightly worse, because it requires two system calls (select and recvfrom), whereas blocking IO needs only one (recvfrom). The advantage of select, however, is that it can handle multiple connections at once. So a web server using select/epoll does not necessarily perform better than one using multithreading plus blocking IO, and may even have higher latency if the number of connections being handled is not very high. The advantage of select/epoll is not that it processes an individual connection faster, but that it can handle more connections.

In the IO multiplexing model, in practice each socket is usually set to non-blocking. However, as shown above, the whole user process is in fact blocked the entire time; it is simply blocked by the select call rather than by socket IO.

select essentially works by setting and checking a data structure that holds fd flag bits. The disadvantages of this are:

  1. The number of fds that a single process can monitor is limited, which also limits how many connections can be listened to. This limit depends largely on system memory; you can check it with cat /proc/sys/fs/file-max. The default is 1024 on a 32-bit machine and 2048 on a 64-bit machine.
  2. Sockets are scanned linearly, i.e. polled, which is inefficient: when there are many sockets, each select() traverses FD_SETSIZE sockets regardless of which ones are active, wasting a lot of CPU time. Polling could be avoided if sockets were registered with a callback function that fires automatically when they become active, which is exactly what epoll and kqueue do.
  3. A data structure holding a large number of fds must be maintained, and copying this structure between user space and kernel space is costly.

Signal-driven I/O model

This mode is rarely used, so I will not focus on it, but give a brief description, as shown in the picture:

To use this I/O model, you turn on the socket's signal-driven I/O and install a signal handler through the sigaction system call. The sigaction call returns immediately and our process keeps working, that is, the process is not blocked. When a datagram is ready, the kernel generates a SIGIO signal for the process, and we can then read the datagram by calling recvfrom, either in the signal handler or in the main loop. However the SIGIO signal is handled, the advantage of this model is that the process is not blocked while waiting for the datagram to arrive.
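A minimal sketch of that setup, assuming sock_fd is an already-created UDP socket: fcntl() with F_SETOWN and O_ASYNC enables signal-driven I/O on the socket, and sigaction() installs the SIGIO handler.

#include <fcntl.h>
#include <signal.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

static int sock_fd;                       // set by enable_sigio()

static void sigio_handler(int signo) {
    (void)signo;
    char buf[1024];
    // The kernel raised SIGIO: a datagram is ready, so recvfrom will not block.
    recvfrom(sock_fd, buf, sizeof(buf), 0, NULL, NULL);
}

void enable_sigio(int fd) {
    sock_fd = fd;

    struct sigaction sa;
    memset(&sa, 0, sizeof(sa));
    sa.sa_handler = sigio_handler;        // install the SIGIO handler
    sigemptyset(&sa.sa_mask);
    sigaction(SIGIO, &sa, NULL);

    fcntl(fd, F_SETOWN, getpid());        // deliver SIGIO for this fd to our process
    int flags = fcntl(fd, F_GETFL, 0);
    fcntl(fd, F_SETFL, flags | O_ASYNC);  // turn on signal-driven I/O for the socket
}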

The disadvantage of this model is that under heavy I/O the signal queue may overflow and notifications may be lost. Signal-driven I/O is nonetheless useful for UDP sockets, where a SIGIO notification means either that a datagram has arrived or that an asynchronous error has occurred. For TCP, however, signal-driven I/O is almost useless, because so many conditions trigger the notification that checking each one costs a lot of resources, and the approach loses its advantage over the previous ones.

Asynchronous IO model

Calling aio_read (one of several AIO APIs) tells the kernel the descriptor, buffer pointer, buffer size, file offset, and notification method, and then returns immediately. When the kernel has copied the data into the buffer, it notifies the application. So in the asynchronous I/O model, both phase 1 and phase 2 are completed by the kernel without the participation of the user thread.

The main difference between the asynchronous IO model and the signal-driven IO model is that signal-driven IO has the kernel tell us when an IO operation can be started, whereas asynchronous IO has the kernel tell us when the IO operation has completed.
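As an illustration, here is a minimal POSIX AIO sketch using aio_read (the file path is only a placeholder; on older glibc you may need to link with -lrt, and note that glibc emulates POSIX AIO with user-space threads rather than true kernel AIO):

#include <aio.h>
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void) {
    int fd = open("/tmp/demo.txt", O_RDONLY);
    char buf[1024];

    struct aiocb cb;
    memset(&cb, 0, sizeof(cb));
    cb.aio_fildes = fd;                 // descriptor
    cb.aio_buf = buf;                   // buffer pointer
    cb.aio_nbytes = sizeof(buf);        // buffer size
    cb.aio_offset = 0;                  // file offset
    // cb.aio_sigevent could request a signal or thread callback on completion.

    aio_read(&cb);                      // returns immediately; the read happens in the background

    while (aio_error(&cb) == EINPROGRESS) {
        // The process is free to do other work here.
    }
    ssize_t n = aio_return(&cb);        // result of the completed read
    printf("read %zd bytes asynchronously\n", n);

    close(fd);
    return 0;
}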

To compare

So far we have introduced five IO models; let's see how they compare:

As you can see, the main difference between the first four I/O models lies in the first phase; their second phase is the same: the process blocks in the recvfrom system call while the data is copied from the kernel into the application process's buffer. The asynchronous I/O model, by contrast, has the kernel notify the application process only after the whole operation is complete.


Here is a vivid example from Zhihu to illustrate the relationship between these models.

Lao Zhang loves tea. Without further ado: he needs to boil water.

Characters: Lao Zhang and two kettles (an ordinary kettle, “kettle” for short, and a whistling kettle, “ringing kettle” for short). 1. Lao Zhang puts the kettle on the fire and stands there waiting for the water to boil. (Synchronous blocking) Lao Zhang feels this is a bit silly. 2. Lao Zhang puts the kettle on the fire and goes to the living room to watch TV, checking the kitchen from time to time to see whether the water has boiled. (Synchronous non-blocking) Still feeling a bit silly, Lao Zhang goes upmarket and buys a kettle that whistles: once the water boils it makes a loud di~~~~ sound. 3. Lao Zhang puts the ringing kettle on the fire and stands there waiting for the water to boil. (Asynchronous blocking) 4. Lao Zhang puts the ringing kettle on the fire and goes to the living room to watch TV; he no longer checks it before it rings, and only goes to fetch the kettle after it rings. (Asynchronous non-blocking) Lao Zhang thinks he's smart.

The so-called synchronous vs. asynchronous refers only to the kettle: the ordinary kettle is synchronous; the ringing kettle is asynchronous.

Although both can do the job, the ringing kettle can, once the water is done, remind Lao Zhang by itself that the water is boiling. That is beyond the reach of the ordinary kettle.

With a synchronous kettle, the caller can only poll by himself (as in case 2), which makes Lao Zhang inefficient.

The so-called blocking vs. non-blocking refers only to Lao Zhang.

The Lao Zhang who stands there waiting is blocking; the Lao Zhang who watches TV is non-blocking.

In cases 1 and 3, Lao Zhang is blocked; even if his wife calls him, he won't notice. Although the ringing kettle in case 3 is asynchronous, that doesn't mean much to a Lao Zhang who is standing there waiting. So asynchrony is usually used together with non-blocking in order to be truly useful.


Multiplexing select, poll, epoll

The diagram of the multiplexing model above shows only select, but the model can in fact be implemented in several ways, for example based on select, poll, or epoll. How do they differ?

select

The fds monitored by the select function are divided into three categories, namely writefds, readfds, and exceptfds. The select call blocks until an fd is ready (readable, writable, or has an exception) or the timeout expires (timeout specifies how long to wait; NULL means wait indefinitely, while a zero timeout makes the call return immediately), and then returns. When select returns, the ready fds can be found by iterating over the fd_set.

select is currently supported on almost all platforms, and this good cross-platform support is another of its advantages. One of the biggest drawbacks of select is the limit on how many fds a single process can monitor, which is imposed by FD_SETSIZE and defaults to 1024.

poll

poll is essentially the same as select. It copies the array passed in by the user into kernel space and then queries the device status of each fd; if no fd is ready, the process sleeps until a device becomes ready or the call times out, after which it is woken up and iterates over the fds again. This process involves a lot of unnecessary traversal. poll has no limit on the maximum number of connections, because it stores the fds in a list rather than a fixed-size bitmap, but it has the following disadvantages:

  1. Large arrays of fds are copied wholesale between user space and the kernel address space on every call;
  2. Like select, poll is level-triggered: if an fd is reported and not handled, the next poll reports the same fd again;
  3. As the number of fds grows, the linear scan causes performance to degrade.

Another drawback shared by select and poll is that as the number of fds grows, only a small fraction of the sockets may be active, yet each call still scans the entire set linearly, so efficiency falls off linearly.
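For comparison, a minimal poll() sketch watching two descriptors (fd1 and fd2 are assumed to be already-opened sockets):

#include <poll.h>
#include <unistd.h>

int wait_readable(int fd1, int fd2) {
    struct pollfd pfds[2];
    pfds[0].fd = fd1; pfds[0].events = POLLIN;
    pfds[1].fd = fd2; pfds[1].events = POLLIN;

    // The pollfd array (not a fixed-size bitmap) is copied into the kernel
    // on every call; a timeout of -1 means wait indefinitely.
    int ready = poll(pfds, 2, -1);
    if (ready <= 0)
        return -1;

    for (int i = 0; i < 2; i++) {              // still a linear scan in user space
        if (pfds[i].revents & POLLIN) {
            char buf[1024];
            read(pfds[i].fd, buf, sizeof(buf));
        }
    }
    return 0;
}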

Level trigger and edge trigger

Level trigger (level-triggered)

As long as the kernel read buffer associated with the file descriptor is not empty, i.e. there is data to read, the kernel keeps sending readable notifications; as long as the kernel write buffer associated with the file descriptor is not full, i.e. there is space to write, the kernel keeps sending writable notifications. LT supports both blocking and non-blocking modes, and it is epoll's default mode.

Edge-triggered

When the kernel read buffer associated with the file descriptor changes from empty to non-empty, a readable notification is sent once; when the kernel write buffer associated with the file descriptor changes from full to not full, a writable notification is sent once. What's the difference? Level triggering keeps firing readable notifications as long as there is data in the read buffer, whereas edge triggering notifies only once, at the moment the buffer goes from empty to non-empty.

LT (level-triggered) is the default working mode and supports both blocking and non-blocking sockets. In this mode the kernel tells you whether a file descriptor is ready, and you can then perform IO on the ready fd. If you do nothing, the kernel keeps notifying you, so this mode is less error-prone. The traditional select/poll are representatives of this model.
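To make the difference concrete, here is a small sketch using the epoll interface introduced in the next section: registering with EPOLLET switches the fd to edge-triggered mode, and the read loop must then drain the buffer until EAGAIN (the fd is assumed to already be non-blocking).

#include <errno.h>
#include <sys/epoll.h>
#include <unistd.h>

// Register fd in edge-triggered mode (omit EPOLLET for the default level-triggered mode).
void add_edge_triggered(int epfd, int fd) {
    struct epoll_event ev;
    ev.events = EPOLLIN | EPOLLET;   // notify only on the empty -> non-empty transition
    ev.data.fd = fd;
    epoll_ctl(epfd, EPOLL_CTL_ADD, fd, &ev);
}

// Because ET reports the transition only once, the buffer must be drained
// until read() returns EAGAIN, otherwise leftover data is never reported again.
void drain(int fd) {
    char buf[4096];
    for (;;) {
        ssize_t n = read(fd, buf, sizeof(buf));
        if (n > 0) continue;                         // keep reading
        if (n == -1 && errno == EAGAIN) break;       // drained; wait for the next edge
        break;                                       // EOF or a real error
    }
}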

epoll

epoll was introduced in the 2.6 kernel and is an enhanced version of the earlier select and poll. Compared with select and poll, epoll is more flexible and has no descriptor limit. epoll uses a single file descriptor to manage multiple descriptors, and it stores the events of the file descriptors the user cares about in an event table inside the kernel, so the copy between user space and kernel space happens only once.

epoll supports both level and edge triggering. The most distinctive feature of epoll is edge triggering, which only tells the process which fds have just become ready, and notifies it only once. Another feature of epoll is that an fd is registered through epoll_ctl for event-ready notification: once the fd is ready, the kernel uses a callback-like mechanism to activate it, and epoll_wait receives the notification.

A diagram summarizes the entire workflow of epoll

Epoll function interface

#include <sys/epoll.h>

// Data structure
// Each epoll object has its own eventpoll structure,
// which stores the events added to the epoll object via epoll_ctl.
// To check whether any event has occurred, epoll_wait only needs to check for
// epitem elements in the rdlist doubly linked list of the eventpoll object
struct eventpoll {
    /* The root node of the red-black tree that stores all the events added to epoll to be monitored */
    struct rb_root  rbr;
    /* The double-linked list holds the events that will be returned to the user using epoll_wait to meet the criteria */
    struct list_head rdlist;
};

// API
int epoll_create(int size); // create an eventpoll object inside the kernel; all sockets to be monitored are attached to it
int epoll_ctl(int epfd, int op, int fd, struct epoll_event *event); // epoll_ctl adds sockets to, or removes them from, the kernel red-black tree
int epoll_wait(int epfd, struct epoll_event * events, int maxevents, int timeout); // epoll_wait checks the ready list and blocks the process when no socket is ready

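Putting the three calls together, a minimal event-loop sketch might look like the following (listen_fd is assumed to be a listening TCP socket; error handling is omitted):

#include <sys/epoll.h>
#include <sys/socket.h>
#include <unistd.h>

#define MAX_EVENTS 64

void epoll_loop(int listen_fd) {
    int epfd = epoll_create(1);                     // step 1: create the eventpoll object

    struct epoll_event ev, events[MAX_EVENTS];
    ev.events = EPOLLIN;                            // level-triggered by default
    ev.data.fd = listen_fd;
    epoll_ctl(epfd, EPOLL_CTL_ADD, listen_fd, &ev); // step 2: add the fd to the red-black tree

    for (;;) {
        // step 3: block until the ready list is non-empty; only ready fds come back
        int n = epoll_wait(epfd, events, MAX_EVENTS, -1);
        for (int i = 0; i < n; i++) {
            int fd = events[i].data.fd;
            if (fd == listen_fd) {                  // new connection
                int conn = accept(listen_fd, NULL, NULL);
                ev.events = EPOLLIN;
                ev.data.fd = conn;
                epoll_ctl(epfd, EPOLL_CTL_ADD, conn, &ev);
            } else {                                // data ready on an existing connection
                char buf[1024];
                ssize_t r = read(fd, buf, sizeof(buf));
                if (r <= 0) {                       // peer closed or error
                    epoll_ctl(epfd, EPOLL_CTL_DEL, fd, NULL);
                    close(fd);
                }
            }
        }
    }
}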

Select has three problems

  1. The select call requires passing in the fd array and copying it into the kernel, which can be a huge resource drain in high-concurrency scenarios. (This can be optimized so the set is not copied on every call.)
  2. Inside the kernel, select still checks the ready state of each file descriptor by traversing them, a synchronous process, albeit without the overhead of a system-call context switch per check. (The kernel layer can be optimized with asynchronous event notification.)
  3. select only returns the number of file descriptors that are readable; the user still has to traverse the set to find which ones. (This can be optimized to return only the ready file descriptors, sparing the user a useless traversal.)

So Epoll mainly improves on these three points.

  1. A collection of file descriptors is kept in the kernel, and the user doesn’t need to re-pass it each time, just tell the kernel what has been changed.
  2. Instead of polling to find ready file descriptors, the kernel wakes them up with asynchronous IO events.
  3. The kernel only returns file descriptors with IO events to the user, and the user does not need to traverse the entire set of file descriptors.

Specifically, the operating system provides these three functions.

The first step is to create an epoll handle:
    int epoll_create(int size);
The second step is to add, modify, or delete the file descriptors the kernel should monitor:
    int epoll_ctl(int epfd, int op, int fd, struct epoll_event *event);
The third step, roughly the equivalent of initiating a select() call:
    int epoll_wait(int epfd, struct epoll_event *events, int maxevents, int timeout);

The difference between the three models

A comparison of select, poll, and epoll:

  • Select has several disadvantages:
    • Every call to select has to copy the fd set from user mode to kernel mode, which is expensive when there are many fds
    • Every call to select also requires the kernel to traverse all of the fds passed in, which is likewise expensive when there are many fds
    • The number of file descriptors supported by select is too small; the default is 1024
  • The advantages of epoll:
    • There is no hard limit on concurrent connections; the maximum number of fds that can be opened is far greater than 1024 (roughly 100,000 connections can be watched per 1 GB of memory).
    • Efficiency improves because epoll does not poll, so performance does not drop as the number of fds grows; the callback is invoked only for active fds. The great advantage of epoll is that it focuses on your “active” connections rather than the total number of connections, so in a real network environment epoll is much more efficient than select and poll.
    • On the surface epoll performs best, but when there are few connections and all of them are very active, select and poll may actually outperform epoll, since epoll's notification mechanism involves many function callbacks.
  • Select is inefficient because it has to poll on every call. But inefficiency is relative and can often be mitigated by good design, depending on the situation.
  • The select and poll implementations have to keep polling the whole fd set themselves until a device is ready, possibly alternating between sleeping and waking several times. epoll also calls epoll_wait to poll the ready list, and it too may sleep and wake several times, but when a device becomes ready epoll invokes the callback function, puts the ready fd onto the ready list, and wakes the process sleeping in epoll_wait. Although both alternate between sleeping and waking, select and poll traverse the entire fd set while awake, whereas epoll only has to check whether the ready list is empty, which saves a great deal of CPU time. This is the performance benefit of the callback mechanism.
  • Select and poll copy the fd set from user space to kernel space once on every call, and put current onto the device wait queues once per call. epoll copies each fd only once (at epoll_ctl time) and puts current onto the wait queue only once (at the start of epoll_wait; note that this wait queue is not a device wait queue but one defined internally by epoll). This also saves a lot of overhead.
A summary comparison of select, poll, and epoll:

  • Mode of operation: select and poll both traverse the fd set; epoll uses callbacks.
  • Underlying data structure: select uses an array (bitmap); poll uses a linked list; epoll uses a red-black tree.
  • IO efficiency: every select or poll call is a linear traversal, O(n) time; with epoll, the callback registered for an fd is invoked when the fd becomes ready and puts it on the ready list, O(1) time.
  • Maximum number of connections: select is limited to 1024 (x86) or 2048 (x64); poll and epoll have no fixed upper limit.
  • Fd copying: select and poll copy the fd set from user space to kernel space on every call; epoll copies an fd into the kernel once, when epoll_ctl is called, and epoll_wait does not copy it again.

Extension problem

Why doesn't database connection pooling use IO multiplexing?

Mp.weixin.qq.com/s/B12jXZTeR…

Library classes

Open source C/C++ network library:

  • ACE: C++, cross-platform
  • Boost's ASIO: C++, cross-platform
  • libevent: C, mainly supports Linux; newer versions add IOCP support for Windows
  • libev: C, supports only Linux and wraps only the epoll model

ACE

ACE is a large middleware product of about 200,000 lines of code. It is overly grand: a pile of design patterns and layer upon layer of architecture, and when using it you have to decide, depending on your situation, which layer to work from. It has cross-platform support.

When using the ACE network library, its memory management has always been confusing: it is never clear where memory that was allocated should be released. ACE really lives up to its reputation as a research library; its wrappers implement essentially every pattern listed in the Design Patterns book, and using it feels like writing Java. If you want to adopt ACE as your network library, don't use it merely as a network library: use it as a framework. If you only want a network library, don't use ACE, because it is too big and too hard to learn. But if you use it as a framework, it feels really pleasant: it has everything, such as thread pools, memory pools, timers, recursive locks, and so on, which is very convenient. Boost's ASIO is much more intuitive in terms of memory management.

Boost

Boost’s ASIO is an asynchronous IO library that encapsulates common Socket operations and simplifies socket-based development. Cross-platform support.

libevent

libevent is a lightweight, open-source, high-performance network library written in C. Its main highlights are: event-driven and high-performance; lightweight and focused on networking, not as bulky as ACE; fairly concise and readable source code; cross-platform, supporting Windows, Linux, BSD, and Mac OS; support for multiple I/O multiplexing technologies such as epoll, poll, /dev/poll, select, and kqueue; support for I/O, timer, and signal events; and registration of event priorities.
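As a flavour of the API, a minimal libevent 2.x sketch might look like this (listen_fd is assumed to be an already-created socket, and details may vary between versions):

#include <event2/event.h>
#include <unistd.h>

// Called by libevent whenever the watched fd becomes readable.
static void on_readable(evutil_socket_t fd, short what, void *arg) {
    (void)what; (void)arg;
    char buf[1024];
    read(fd, buf, sizeof(buf));
}

void run_event_loop(int listen_fd) {
    struct event_base *base = event_base_new();          // picks epoll/kqueue/select internally
    struct event *ev = event_new(base, listen_fd, EV_READ | EV_PERSIST,
                                 on_readable, NULL);
    event_add(ev, NULL);                                  // NULL timeout: no time limit
    event_base_dispatch(base);                            // run the event loop
    event_free(ev);
    event_base_free(base);
}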

libev

libev is a library written in C that supports only Linux. When I studied it, it wrapped only the epoll model, and I don't know whether newer versions have improved on this. Its usage is similar to libevent, but it is very simple: the code amounts to a minimal library of only a few thousand lines. It is obviously not cross-platform, but if you only need to run on Linux, you can use this library.
