Introduction to the five IO models of UNIX

andrew7

I/O models: synchronous, asynchronous, blocking, non-blocking; socket blocking vs. non-blocking, synchronous vs. asynchronous

Synchronous and asynchronous

Synchronous vs. asynchronous is mainly a distinction seen from the client side. A synchronous interaction is like an ordinary web page: after the page sends a POST, nothing else happens until the response comes back; the whole round trip is one synchronous action. An asynchronous interaction is the browser talking to the server through Ajax: the request goes out, the page keeps working, and the reply is handled whenever it arrives. The difference between synchronous I/O and asynchronous I/O is whether the process is blocked while the data is being accessed.

Blocking and non-blocking

– Blocking: the user sends a request to the web server; the web server passes the request on to the database, waits for the database to return the result, and only then returns the data to the user.
– Non-blocking: the user sends a request to the web server; the web server tells the database what it needs and then goes off to do something else; when the database is done it notifies the web server (by a notification, callback, or status change); the web server comes back for the message and returns the result to the user.

The difference between blocking I/O and non-blocking I/O is whether an application call returns immediately. UNIX defines five I/O models:

– blocking I/O
– non-blocking I/O
– I/O multiplexing
– signal-driven I/O
– asynchronous I/O

The first four are synchronous; only the last is asynchronous.

Blocking I/O model: the process blocks until the data copy completes. The process is suspended and the entire I/O operation blocks, like waiting at home doing nothing until the mail carrier arrives. A minimal sketch of this case is shown below.
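
To make the blocking model concrete, here is a minimal sketch in C, assuming a connected TCP socket `sockfd` created elsewhere: `recv()` simply does not return until the kernel has data to hand over (or the peer closes the connection).

```c
#include <stdio.h>
#include <sys/types.h>
#include <sys/socket.h>

/* Blocking I/O: the process sleeps inside recv() until the kernel has
 * copied data into buf (or the peer closes / an error occurs). */
ssize_t read_blocking(int sockfd)
{
    char buf[4096];

    ssize_t n = recv(sockfd, buf, sizeof(buf), 0);   /* blocks here */
    if (n > 0)
        printf("received %zd bytes\n", n);
    else if (n == 0)
        printf("peer closed the connection\n");
    else
        perror("recv");
    return n;
}
```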

Non-blocking I/O model

In the non-blocking I/O model, the process issues the I/O call over and over: if the data is not ready, the call returns an error code immediately and the process simply tries again later. This constant polling takes up CPU. A sketch of the pattern follows.
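
A minimal sketch of that polling pattern, assuming a connected socket `sockfd` created elsewhere: the descriptor is switched to `O_NONBLOCK` and the process keeps retrying until the kernel finally has data.

```c
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <sys/socket.h>
#include <unistd.h>

/* Non-blocking I/O: recv() returns immediately with EAGAIN/EWOULDBLOCK
 * when no data is ready, so the caller polls in a loop (burning CPU). */
ssize_t read_nonblocking(int sockfd)
{
    char buf[4096];

    int flags = fcntl(sockfd, F_GETFL, 0);
    fcntl(sockfd, F_SETFL, flags | O_NONBLOCK);     /* enable non-blocking mode */

    for (;;) {
        ssize_t n = recv(sockfd, buf, sizeof(buf), 0);
        if (n >= 0)
            return n;                               /* data arrived (or peer closed) */
        if (errno != EAGAIN && errno != EWOULDBLOCK) {
            perror("recv");
            return -1;
        }
        usleep(10 * 1000);                          /* nothing yet: poll again shortly */
    }
}
```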

I/O multiplexing is implemented mainly with select and epoll. For a single I/O port it has no advantage over blocking I/O; it actually costs two calls and two returns instead of one. Its benefit is that it can monitor many I/O ports at once: one waiting mechanism is shared by many connections, like the mail room of a residential compound, where the attendant notifies the right resident to come and collect new mail when it arrives. A select-based sketch is shown below.
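
A minimal select()-based sketch of this idea, assuming `fds[]` holds a few connected sockets created elsewhere: the first call blocks once for all of them, and the follow-up `recv()` on a ready descriptor returns at once because the kernel already has its data.

```c
#include <stdio.h>
#include <sys/select.h>
#include <sys/socket.h>

/* I/O multiplexing: one blocking select() watches several descriptors;
 * recv() on a ready descriptor then returns immediately. */
void serve(int fds[], int count)
{
    char buf[4096];
    fd_set readfds;

    for (;;) {
        FD_ZERO(&readfds);
        int maxfd = -1;
        for (int i = 0; i < count; i++) {
            FD_SET(fds[i], &readfds);
            if (fds[i] > maxfd)
                maxfd = fds[i];
        }

        /* First call: block until at least one descriptor is readable. */
        int ready = select(maxfd + 1, &readfds, NULL, NULL, NULL);
        if (ready < 0) {
            perror("select");
            return;
        }

        /* Second call: the kernel already has data queued for the ready
         * descriptors, so recv() comes back right away. */
        for (int i = 0; i < count; i++) {
            if (FD_ISSET(fds[i], &readfds)) {
                ssize_t n = recv(fds[i], buf, sizeof(buf), 0);
                printf("fd %d: %zd bytes\n", fds[i], n);
            }
        }
    }
}
```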

In the signal-driven I/O model, the process enables signal-driven I/O on the descriptor and installs a signal handler with sigaction (this call returns immediately, and the process keeps working). When the data is ready, the kernel raises a signal notifying the application to come and fetch the data. A sketch of the setup is shown below.
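
A minimal sketch of that setup, assuming a global connected socket `sockfd` established elsewhere: `sigaction()` installs a SIGIO handler, `fcntl()` with `F_SETOWN`/`O_ASYNC` tells the kernel to deliver SIGIO to this process, and the process goes on with other work until the handler runs.

```c
#include <fcntl.h>
#include <signal.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

static int sockfd;   /* connected socket, set up elsewhere */

/* Runs when the kernel signals that data is ready; the recv() here
 * returns at once because the data is already queued. */
static void sigio_handler(int signo)
{
    char buf[4096];
    ssize_t n = recv(sockfd, buf, sizeof(buf), MSG_DONTWAIT);
    (void)signo;
    (void)n;
}

void enable_signal_driven_io(void)
{
    struct sigaction sa;
    memset(&sa, 0, sizeof(sa));
    sa.sa_handler = sigio_handler;
    sigemptyset(&sa.sa_mask);
    sigaction(SIGIO, &sa, NULL);                 /* install the handler     */

    fcntl(sockfd, F_SETOWN, getpid());           /* deliver SIGIO to us     */
    int flags = fcntl(sockfd, F_GETFL, 0);
    fcntl(sockfd, F_SETFL, flags | O_ASYNC);     /* raise SIGIO on readiness */

    /* The process is free to do other work from here on; the handler
     * fires whenever the kernel has data for this socket. */
}
```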

Asynchronous I/O tells the kernel to start an operation and to notify the process only after the entire operation has completed, including copying the data from the kernel into the user's own buffer.

The main difference between the signal-driven I/O model and the asynchronous I/O model is that in signal-driven I/O the kernel tells us when it is time to start an I/O operation, and the process still has to perform that operation and wait for it, so it is synchronous. In the asynchronous I/O model, the kernel tells us when the I/O has already finished; the process does not have to carry out the I/O itself, so it is asynchronous. A POSIX AIO sketch is shown below.
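
As a rough sketch of this model using the POSIX AIO interface (aio_read from <aio.h>; on Linux, link with -lrt), assuming `fd` is an open descriptor created elsewhere: the call only submits the request, and the kernel invokes the completion callback after it has both waited for the data and copied it into the user buffer.

```c
#include <aio.h>
#include <stdio.h>
#include <string.h>

static char buf[4096];

/* Completion callback: by the time this runs, the data is already in buf. */
static void io_done(union sigval sv)
{
    struct aiocb *cb = sv.sival_ptr;
    printf("async read finished: %zd bytes\n", aio_return(cb));
}

void start_async_read(int fd)
{
    /* The control block must outlive this function while the I/O is in flight. */
    static struct aiocb cb;
    memset(&cb, 0, sizeof(cb));
    cb.aio_fildes = fd;
    cb.aio_buf    = buf;
    cb.aio_nbytes = sizeof(buf);
    cb.aio_offset = 0;

    /* Ask the kernel to run io_done() in a new thread when everything,
     * including the copy into buf, is finished. */
    cb.aio_sigevent.sigev_notify          = SIGEV_THREAD;
    cb.aio_sigevent.sigev_notify_function = io_done;
    cb.aio_sigevent.sigev_value.sival_ptr = &cb;

    aio_read(&cb);   /* submit and return immediately; no blocking here */
    /* ... the process keeps working; io_done() runs on completion. */
}
```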

A summary from another author

Blocking I/O: this is the familiar I/O model. When a process performs an I/O operation, the call does not return until the data has been copied from kernel space into the user process's space. The advantage of this model is its simplicity, and the CPU can schedule other processes while this one is blocked.

Non-blocking I/O: when I first looked at non-blocking I/O I thought it was better than blocking I/O, but once I saw how it is used I realized this model is hardly used in practice and serves mostly as a theoretical I/O model. The idea is that the I/O call does not block and instead returns an error code (or some other code) if no data is ready, so the caller keeps polling the I/O function. The result is not only the same "blocking" effect as blocking I/O at the macro level, but also wasted CPU at the micro level, because the CPU is spent on polling all the time. This model is therefore less useful than the blocking I/O model.

I/O multiplexing: my understanding of I/O multiplexing is that a single system call queries I/O readiness for multiple descriptors. To achieve this, the blocking has to be moved to where the event notification happens, so one logical operation is split into two system calls. The first system call asks about I/O readiness for many descriptors and blocks there; the second system call, on a descriptor that is already ready, is in theory (as I understand it) still blocking, but the kernel has already prepared the data, so the wait is negligible. Essentially, it is still blocking. (An epoll-based sketch of this two-call structure is shown after this summary.)

Signal-driven I/O: as we all know, signals are one of the mechanisms UNIX provides for communicating with processes. The familiar kill -9 command (kill sends a signal to a process, and 9 is just one of many signal numbers), or Ctrl+C, signals a process to terminate, and the process exits. My understanding of the signal-driven I/O model is this: after the process makes the system call that starts the I/O operation, the call returns directly and the process goes back to its own work; when the I/O data is ready, the kernel notifies the initiating process with a signal, and the process's signal handler then reads the I/O data. In essence this is still a blocking style of I/O, because the read inside the signal handler also blocks, but by the time it is issued the kernel already has the data ready.

Asynchronous I/O: this is truly asynchronous I/O. The mechanism is as follows: when the user issues the asynchronous I/O system call, the corresponding data-processing function is registered as a callback; when the I/O data is ready, the kernel actively invokes that callback. Under this model the user process makes only one system call, and it returns immediately, so the process does not block, which meets the POSIX definition of asynchronous I/O. In fact, the idea is similar to signal-driven I/O; the only difference is that the operation on the I/O data is initiated by the kernel in asynchronous I/O, and by the user process in signal-driven I/O.
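
For completeness, here is the same two-call structure sketched with Linux epoll, the other multiplexing interface mentioned earlier (assuming `fds[]` holds connected sockets created elsewhere; epoll is Linux-specific): `epoll_wait()` is the blocking first call, and `recv()` on the ready descriptors is the near-instant second call.

```c
#include <stdio.h>
#include <sys/epoll.h>
#include <sys/socket.h>
#include <unistd.h>

/* I/O multiplexing with epoll: register interest once, then loop on
 * epoll_wait() and read from whichever descriptors are ready. */
void serve_epoll(int fds[], int count)
{
    char buf[4096];
    struct epoll_event ev, events[64];

    int epfd = epoll_create1(0);
    for (int i = 0; i < count; i++) {
        ev.events  = EPOLLIN;              /* interested in readability */
        ev.data.fd = fds[i];
        epoll_ctl(epfd, EPOLL_CTL_ADD, fds[i], &ev);
    }

    for (;;) {
        /* First call: block until some registered descriptor is ready. */
        int n = epoll_wait(epfd, events, 64, -1);

        /* Second call(s): data is already queued, so recv() is "instant". */
        for (int i = 0; i < n; i++) {
            ssize_t got = recv(events[i].data.fd, buf, sizeof(buf), 0);
            printf("fd %d: %zd bytes\n", events[i].data.fd, got);
        }
    }
    /* close(epfd) when done (unreachable in this endless-loop sketch) */
}
```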
