The original title of this article is "Reunderstanding BIO and NIO from a Practical Perspective". The original article was written by Object and has been lightly edited for clarity.

1. Introduction

I have recently been studying Java NIO and BIO and have read many blog posts. The theoretical explanations of NIO concepts in them are clear and fairly complete, but after reading them all I found that my understanding of NIO was still superficial (please forgive my slowness).

Based on the above, I had the idea of writing this article. It does not cover much of the theory of Java NIO and Java BIO (see the "Related Articles" section of this article if you need that), but instead summarizes my own insights into Java NIO from a coding-practice perspective, with code examples. Once you have worked through the code yourself and then go back to the theory, you will see it from a different angle. I hope it helps you understand!

Term conventions: BIO refers to what Java programmers call classic blocking IO, and NIO refers to the non-blocking IO added in Java 1.4.

(This article is simultaneously published at: www.52im.net/thread-2846…)

2. About the author

Author: Object

Personal blog: blog.objectspace.cn/

3. Related articles

To avoid explaining too much conceptual content about Java NIO and BIO, this article mentions as little theory as possible. If you are not familiar with the theory of Java NIO and BIO, it is recommended that you first read the following articles on the 52im.net instant messaging website; they will help you understand this article better.

The Difference between Java NIO and Classic IO in one Minute

The greatest Introduction to Java NIO ever: For those worried about getting started and giving up, read this! (* Recommended)

High Performance Network Programming part 5: Understanding the I/O Model in High Performance Network Programming

High Performance Network Programming (6) : Understanding the Threading Model in High Performance Network Programming

4. First, use classic BIO to implement a simple single-threaded network program

To understand BIO and NIO, we should first implement a simple, single-threaded server that is not too complex.

4.1 Why a single-threaded demo

Because one of the differences between BIO and NIO is best illustrated in a single-threaded environment. Of course, I will also demonstrate BIO's so-called one-thread-per-connection behavior as it occurs in a real environment.

4.2 Server code

import java.io.IOException;
import java.net.ServerSocket;
import java.net.Socket;

public class Server {

    public static void main(String[] args) {
        byte[] buffer = new byte[1024];
        try {
            ServerSocket serverSocket = new ServerSocket(8080);
            System.out.println("Server started and listening on port 8080");
            while (true) {
                System.out.println();
                System.out.println("The server is waiting for a connection...");
                Socket socket = serverSocket.accept();
                System.out.println("The server has received a connection request...");
                System.out.println();
                System.out.println("The server is waiting for data...");
                socket.getInputStream().read(buffer);
                System.out.println("The server has received data");
                System.out.println();
                String content = new String(buffer);
                System.out.println("Received data: " + content);
            }
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}

4.3 Client code

import java.io.IOException;
import java.net.Socket;

public class Consumer {

    public static void main(String[] args) {
        try {
            Socket socket = new Socket("127.0.0.1", 8080);
            socket.getOutputStream().write("Send data to server".getBytes());
            socket.close();
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}

4.4 Code Parsing

We first create a server class that instantiates a ServerSocket bound to port 8080. The accept method is then called to receive a connection request, and the read method is called to receive the data sent by the client. Finally, the received data is printed.

After completing the server, we implement a client: first instantiate a Socket object connecting to IP 127.0.0.1 (localhost) and port 8080, then call the write method to send data to the server.

4.5 Running Results

When we start the server, but the client has not yet initiated a connection to the server, the console results are as follows:

When the client starts and sends data to the server, the console results are as follows:

4.6 Conclusion

At the very least, the above results show that after the server starts, the accept method causes it to block until a client requests a connection.

5. Extend the client function

In the previous section, the client logic was: create the Socket -> connect to the server -> send data, so the data is sent immediately after connecting. Now let's extend the client so that it waits for user input after connecting to the server. (Note: in this section, the server-side code remains the same.)

5.1 Improved code

import java.io.IOException;
import java.net.Socket;
import java.util.Scanner;

public class Consumer {

    public static void main(String[] args) {
        try {
            Socket socket = new Socket("127.0.0.1", 8080);
            Scanner sc = new Scanner(System.in);
            String message = sc.next();   // block until the user types a message
            socket.getOutputStream().write(message.getBytes());
            socket.close();
            sc.close();
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}

5.2 Test

When the server is started and the client has not requested a connection to the server, the console results are as follows:

When the server starts and the client connects to the server but does not send data, the console results are as follows:

When the server starts, the client connects to the server, and sends data, the console results are as follows:

5.3 Conclusion

From the above results, we can see that the server side after starting:

1) Wait for the connection request from the client (first block);

2) If there is no client connection, the server will always block and wait;

3) Then when the client connects, the server waits for the client to send data (second block);

4) If the client does not send data, the server will block waiting for the client to send data.

The server will block twice from starting to receiving data from the client:

1) Block while waiting for connection for the first time;

2) Block the second time while waiting for data.

Blocking twice is a very important characteristic of BIO.

6. BIO

6.1 Weaknesses of BIO in single-threaded conditions

In the last two sections, we used classic Java BIO to implement a simple network communication program that runs on a single thread.

If the server receives a connection but has not yet received data from the client, it blocks in the read() method and cannot respond to another client's request. In other words: without multithreading, BIO cannot handle multiple client requests.

6.2 How does the BIO Handle Concurrency

In the server implementation above, we wrote a single-threaded BIO server, and it is not hard to see that a single thread cannot serve multiple clients. So how can BIO handle multiple client requests?

The solution is easy to think of: whenever a connection request arrives, create a new thread to handle that connection. This is how BIO handles multiple client requests, and it is why BIO's server implementation model is described as one connection, one thread: whenever a client issues a connection request, the server starts a thread to process it.

6.3 Multithreading BIO server simple implementation

import java.io.IOException;
import java.net.ServerSocket;
import java.net.Socket;

public class Server {

    public static void main(String[] args) {
        try {
            ServerSocket serverSocket = new ServerSocket(8080);
            System.out.println("Server started and listening on port 8080");
            while (true) {
                System.out.println();
                System.out.println("The server is waiting for a connection...");
                Socket socket = serverSocket.accept();
                new Thread(new Runnable() {
                    @Override
                    public void run() {
                        System.out.println("The server has received a connection request...");
                        System.out.println();
                        System.out.println("The server is waiting for data...");
                        // each thread gets its own buffer so concurrent reads do not clobber each other
                        byte[] buffer = new byte[1024];
                        try {
                            socket.getInputStream().read(buffer);
                        } catch (IOException e) {
                            e.printStackTrace();
                        }
                        System.out.println("The server has received data");
                        System.out.println();
                        String content = new String(buffer);
                        System.out.println("Received data: " + content);
                    }
                }).start();
            }
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}

6.4 Running Results

Obviously, our server now runs in one-thread-per-connection mode; in other words, the server creates a thread for every connection request it processes.

6.5 Disadvantages of multi-threaded BIO server

The multithreaded BIO server solves the problem that a single-threaded BIO server cannot handle concurrency, but it introduces another problem: if a large number of clients connect to our server but never send messages, the server still creates a separate thread for each of them, and when the number of such connections grows large, this puts enormous pressure on the server.

So: if there are many of these inactive connections, we would rather use a single-threaded solution, but a single thread cannot handle concurrency. This leads to a dilemma, and hence to NIO.

7. NIO

Off-topic: if you don't yet know enough Java NIO theory, you are advised to read these two articles first: "The Difference between Java NIO and Classic IO in one Minute" and "The greatest Introduction to Java NIO ever: For those worried about getting started and giving up, read this!".

7.1 Introduction of NIO

Let's look again at the code of the BIO server in single-threaded mode. The most fundamental problem NIO needs to solve is the two blocks that exist in BIO: the block while waiting for a connection and the block while waiting for data.

import java.io.IOException;
import java.net.ServerSocket;
import java.net.Socket;

public class Server {

    public static void main(String[] args) {
        byte[] buffer = new byte[1024];
        try {
            ServerSocket serverSocket = new ServerSocket(8080);
            System.out.println("Server started and listening on port 8080");
            while (true) {
                System.out.println();
                System.out.println("The server is waiting for a connection...");
                // Block 1: blocks while waiting for a connection
                Socket socket = serverSocket.accept();
                System.out.println("The server has received a connection request...");
                System.out.println();
                System.out.println("The server is waiting for data...");
                // Block 2: blocks while waiting for data
                socket.getInputStream().read(buffer);
                System.out.println("The server has received data");
                System.out.println();
                String content = new String(buffer);
                System.out.println("Received data: " + content);
            }
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}

To repeat the point: if a single-threaded server is blocked waiting for data, it cannot respond when a second connection request arrives. If it is a multithreaded server, a large number of idle connections will spawn new threads, and those threads will sit on system resources and go to waste.

The problem then shifts to how a single-threaded server can receive new client connections while waiting for client data to arrive.

7.2 Simulate the NIO solution

To solve the problem that the single-threaded server cannot accept new requests while it is blocked receiving data, we simply make the server stop blocking while it waits for data.

[First solution (no blocking while waiting for connections or data)]:

import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;
import java.nio.charset.Charset;

public class Server {

    public static void main(String[] args) throws InterruptedException {
        ByteBuffer byteBuffer = ByteBuffer.allocate(1024);
        try {
            // NIO channel classes, which can be set to non-blocking mode
            ServerSocketChannel serverSocketChannel = ServerSocketChannel.open();
            serverSocketChannel.bind(new InetSocketAddress(8080));
            // Set it to non-blocking
            serverSocketChannel.configureBlocking(false);
            while (true) {
                SocketChannel socketChannel = serverSocketChannel.accept();
                if (socketChannel == null) {
                    // No connection yet
                    System.out.println("Waiting for a client to connect...");
                    Thread.sleep(5000);
                } else {
                    System.out.println("Received a client connection request...");
                }
                if (socketChannel != null) {
                    // Set it to non-blocking
                    socketChannel.configureBlocking(false);
                    int effective = socketChannel.read(byteBuffer);
                    if (effective > 0) {
                        byteBuffer.flip(); // switch the buffer from write mode to read mode
                        String content = Charset.forName("UTF-8").decode(byteBuffer).toString();
                        System.out.println(content);
                        byteBuffer.clear();
                    } else {
                        System.out.println("No client message received yet");
                    }
                }
            }
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}

Running results:

Code parsing:

As you can see, in this solution the server does not block while receiving a client message; it immediately goes back to accepting connections. Before the user has had time to type a message, the server has already moved on to accepting other clients' requests. In other words, the server loses track of the current client.

[Solution 2 (cache the Sockets, poll for data readiness)]:

import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;
import java.nio.charset.Charset;
import java.util.ArrayList;
import java.util.List;

public class Server {

    public static void main(String[] args) throws InterruptedException {
        ByteBuffer byteBuffer = ByteBuffer.allocate(1024);
        List<SocketChannel> socketList = new ArrayList<SocketChannel>();
        try {
            // NIO channel classes, which can be set to non-blocking mode
            ServerSocketChannel serverSocketChannel = ServerSocketChannel.open();
            serverSocketChannel.bind(new InetSocketAddress(8080));
            // Set it to non-blocking
            serverSocketChannel.configureBlocking(false);
            while (true) {
                SocketChannel socketChannel = serverSocketChannel.accept();
                if (socketChannel == null) {
                    // No new connection
                    System.out.println("Waiting for a client to connect...");
                    Thread.sleep(5000);
                } else {
                    System.out.println("Received a client connection request...");
                    socketList.add(socketChannel);
                }
                for (SocketChannel socket : socketList) {
                    socket.configureBlocking(false);
                    int effective = socket.read(byteBuffer);
                    if (effective > 0) {
                        byteBuffer.flip(); // switch the buffer from write mode to read mode
                        String content = Charset.forName("UTF-8").decode(byteBuffer).toString();
                        System.out.println("Received message: " + content);
                        byteBuffer.clear();
                    } else {
                        System.out.println("No client message received yet");
                    }
                }
            }
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}

Running results:

Code parsing:

In solution 1 we used a non-blocking approach, but found that once the server is non-blocking it no longer waits for the client to send its message; it simply goes back to accepting new client connections, and the current client connection is lost.

In solution 2, we store the connections in a List, and on every pass poll each one to see whether a message is ready; if it is, we print it.

As you can see, we never started a second thread. A single thread handles multiple client connections, which solves BIO's inability to handle multiple clients in single-threaded mode, and also solves the connection-loss problem of the non-blocking approach in solution 1.

7.3 Existing Problems (Solution 2)

As you can see from the results just now, the message is not lost and the program is not blocked.

But there may be a problem with how we receive messages: we poll every connection on every pass to check whether a message is ready. The test case had only three connections, so no problem was visible; but suppose there were 10 million connections, or more. Polling this way would be extremely inefficient.

On the other hand, out of 10 million connections, perhaps only 1 million actually send messages and the remaining 9 million never send anything, yet those connections must still be polled every time, which is obviously wasteful.

7.4 How real NIO solves this problem

In real NIO, the polling is not done at the Java level. Instead it is delegated to the operating system through a system call (select, or epoll on Linux), so the kernel actively tells us which sockets have data.
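In Java, these system calls are reached through the Selector abstraction (on Linux a Selector is typically backed by epoll). Below is a minimal sketch of what solution 2 looks like when readiness detection is handed to the OS via a Selector. The class name SelectorDemo and the in-process client are my own additions so the sketch runs standalone; they are not from the original article.

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;
import java.nio.charset.StandardCharsets;
import java.util.Iterator;

public class SelectorDemo {
    public static void main(String[] args) throws IOException {
        Selector selector = Selector.open();
        ServerSocketChannel server = ServerSocketChannel.open();
        server.bind(new InetSocketAddress("127.0.0.1", 0)); // port 0: let the OS pick a free port
        server.configureBlocking(false);
        server.register(selector, SelectionKey.OP_ACCEPT);  // ask the OS to watch for connections

        // A client in the same process, purely so this sketch runs standalone.
        int port = ((InetSocketAddress) server.getLocalAddress()).getPort();
        SocketChannel client = SocketChannel.open(new InetSocketAddress("127.0.0.1", port));
        client.write(StandardCharsets.UTF_8.encode("hello from client"));
        client.shutdownOutput();                            // signal end-of-stream to the server

        ByteBuffer buffer = ByteBuffer.allocate(1024);
        boolean done = false;
        while (!done) {
            selector.select();                              // blocks until the OS reports readiness
            Iterator<SelectionKey> it = selector.selectedKeys().iterator();
            while (it.hasNext()) {
                SelectionKey key = it.next();
                it.remove();                                // the selected-key set is not cleared for us
                if (key.isAcceptable()) {
                    SocketChannel sc = ((ServerSocketChannel) key.channel()).accept();
                    sc.configureBlocking(false);
                    sc.register(selector, SelectionKey.OP_READ);
                } else if (key.isReadable()) {
                    SocketChannel sc = (SocketChannel) key.channel();
                    if (sc.read(buffer) == -1) {            // client finished: decode what arrived
                        sc.close();
                        buffer.flip();
                        System.out.println(StandardCharsets.UTF_8.decode(buffer).toString());
                        done = true;
                    }
                }
            }
        }
        client.close();
        server.close();
        selector.close();
    }
}
```

Note the contrast with solution 2: the thread sleeps inside select() until the kernel reports that some channel is ready, instead of sweeping the whole connection list on every pass.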

For this knowledge, the following articles are recommended:

High Performance Network Programming part 5: Understanding the I/O Model in High Performance Network Programming

High Performance Network Programming (6) : Understanding the Threading Model in High Performance Network Programming

8. The difference between using select/epoll and polling directly at the application layer

We implemented the polling logic for multiple client connections in Java ourselves, but the real NIO source code does not do it that way. NIO uses the operating system's underlying multiplexing system calls (select on Windows, epoll on Linux). Since Java could implement the polling itself, why call into the system to do it?

8.1 The underlying logic of select

Suppose five connections A, B, C, D and E connect to the server at the same time. In our design above, the program traverses these five connections, polls each one, and checks whether its data is ready. So what is the difference between select and this hand-written polling?

First: the Java polling we wrote must make a system call for every socket it checks on every pass, so each round of polling causes many unnecessary context switches.

select, by contrast, copies the five requests from user space into kernel space in one call and checks data readiness for each request inside the kernel, avoiding those frequent context switches. So it is much more efficient than polling written at the application layer.

If select finds no request with data, it blocks (yes, select is a blocking function). When one or more requests have data ready, select marks the corresponding file descriptors and returns; the program then iterates over the set to see which requests have data.

Disadvantages of select:

1) The underlying storage relies on a bitmap, with an upper limit of 1024 descriptors;

2) The file descriptor set is modified in place, so it must be re-initialized before it can be reused;

3) Copying the fd (file descriptor) set from user mode to kernel mode still has a cost;

4) After select returns, the program must iterate over the whole set again to find which requests have data.

8.2 The underlying logic of poll

poll works much like select. Let's look at the structure poll uses internally:

struct pollfd {
    int fd;         /* the file descriptor */
    short events;   /* requested events */
    short revents;  /* returned events */
};

Like select, poll copies all the requests into kernel space, and it too is a blocking function. When one or more requests have data, poll sets the events/revents fields of the corresponding pollfd structures rather than marking the fd itself, so nothing needs to be re-initialized before the next call. And poll's internal storage is an array of pollfd structures rather than a bitmap, so it is not limited to 1024 entries. This fixes select's disadvantages 1) and 2).

8.3 The underlying logic of epoll

epoll is the newest of the IO multiplexing functions. Here are just some of its features.

The biggest difference between epoll and the two functions above is that its fd set is shared between user space and the kernel, so there is no need to copy it from user mode to kernel mode, which saves system resources.

In addition, with select and poll, if any request has data ready they return the whole set, and the program must iterate to find which requests actually have data. epoll, however, effectively returns only the ready requests: when it discovers requests with data it reorders its set, putting all fds with data at the front, and returns N, the number of ready requests. The upper-layer program then does not need to poll everything; it simply iterates over the first N requests returned by epoll, all of which have data.

The above knowledge about high-performance threading, network IO model can be read in detail in the following several articles:

High Performance Network Programming part 5: Understanding the I/O Model in High Performance Network Programming

High Performance Network Programming (6) : Understanding the Threading Model in High Performance Network Programming

9. Summary of the concepts of BIO and NIO in Java

Articles usually put the concepts at the beginning, but this time I have chosen to put them at the end, because after the practice above I believe you already have some feel for Java BIO and NIO, and the concepts should now be easier to understand.

To understand the concept, let’s take a bank withdrawal as an example:

1) Synchronous: you go to the bank with your own bank card to withdraw money (with synchronous IO, Java handles the IO reading and writing itself);

2) Asynchronous: you entrust a helper to take your bank card to the bank, withdraw the money, and bring it back to you (with asynchronous IO, Java delegates the IO read/write to the OS, passing it the data buffer address and size (the bank card and password); the OS needs to support asynchronous IO APIs);

3) Blocking: you queue at the ATM and must wait your turn (with blocking IO, the Java call blocks and does not return until the read or write is complete);

4) Non-blocking: you ask the lobby manager whether it is your turn; if not, you go do something else and check again later (with non-blocking IO, if the Java call cannot read or write it returns immediately; when the IO event dispatcher signals readiness, reading/writing continues, looping until the read/write is complete).

Java support for BIO, NIO:

1) Java BIO (Blocking I/O): synchronous and blocking. The server implementation model is one connection, one thread: when a client makes a connection request, the server must start a thread to handle it. If the connection then does nothing, that thread's overhead is wasted, though this can of course be mitigated with a thread pool;

2) Java NIO (Non-blocking I/O): synchronous and non-blocking. Connection requests from clients are registered with a multiplexer, and the multiplexer starts processing a connection only when that connection actually has an I/O request, so one thread can serve many connections.
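The thread-pool improvement mentioned in 1) is not shown in the article, so here is a minimal sketch of one way it might look. The class name PooledServer, the pool size of 4, and the in-process client are illustrative assumptions, not part of the original code.

```java
import java.io.IOException;
import java.io.InputStream;
import java.net.ServerSocket;
import java.net.Socket;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class PooledServer {
    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(4); // at most 4 handler threads
        ServerSocket serverSocket = new ServerSocket(0);        // port 0: pick a free port

        // Accept loop: hand each connection to the pool instead of new Thread(...).start()
        new Thread(() -> {
            try {
                while (true) {
                    Socket socket = serverSocket.accept();
                    pool.submit(() -> {
                        try (InputStream in = socket.getInputStream()) {
                            byte[] buffer = new byte[1024];
                            int n = in.read(buffer);
                            if (n > 0) {
                                System.out.println("received: " + new String(buffer, 0, n, "UTF-8"));
                            }
                        } catch (IOException e) {
                            e.printStackTrace();
                        }
                    });
                }
            } catch (IOException ignored) {
                // serverSocket was closed: stop accepting
            }
        }).start();

        // In-process client so the sketch runs standalone.
        try (Socket client = new Socket("127.0.0.1", serverSocket.getLocalPort())) {
            client.getOutputStream().write("hello pool".getBytes("UTF-8"));
        }
        Thread.sleep(500);  // give the pool task time to handle the message
        serverSocket.close();
        pool.shutdown();
    }
}
```

The pool caps the number of live threads, so a flood of idle connections queues up as tasks instead of each one pinning its own thread; the trade-off is that a slow or silent connection can still occupy one of the pool's threads while it blocks in read().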

BIO and NIO application scenario analysis:

1) BIO mode: suitable for architectures with a small, fixed number of connections. It demands a lot of server resources and limits the application's concurrency, and it was the only choice before JDK 1.4; but the programs are intuitive, simple, and easy to understand;

2) NIO mode: suitable for architectures with many connections where each connection does relatively little work (light operations), such as a chat server. Concurrency is limited only by the application, but the programming is more complicated. It has been supported since JDK 1.4.

10. Summary of this article

This article introduced Java BIO and NIO from my own practical perspective. I believe this way of understanding BIO and NIO goes deeper than just reading the concepts, and I hope you will type out the code yourself and build your own understanding of Java BIO and NIO from the programs' results.

Appendix: more information on NIO and network programming

[1] NIO asynchronous network programming Data:

The Principle of Java New Generation Network Programming Model AIO and Introduction to Linux System AIO

“11 Questions and Answers to why YOU Choose Netty”

Gossip on open Source NIO Frameworks: Did MINA or Netty come first?

Netty or Mina: In Depth and Comparison, Part 1

Netty or Mina: In Depth and Comparison, Part II

NIO Framework Introduction (I) : Server based on Netty4 UDP Bidirectional communication Demo

NIO Framework Introduction (ii) : Server based ON MINA2 UDP Bidirectional communication Demo

NIO Framework Introduction (3) : iOS, MINA2, Netty4 cross-platform UDP Two-way Communication Actual combat

NIO Framework Introduction (4) : Android and MINA2, Netty4 cross-platform UDP two-way communication actual combat

Netty 4.x Learning (1) : ByteBuf in Detail

Netty 4.x learning (2) : Channel and Pipeline Details

Netty 4.x Learning part 3: The Threading Model

Apache Mina Framework Advanced Part 1: IoFilter Details

Apache Mina Framework Advanced Part ii: IoHandler Details

Summary of MINA2 Thread Principle (with simple test example)

Apache MINA2.0 Development Guide (Chinese Version)

MINA, Netty source code (online reading version) collated and released

Solving the problem of TCP sticking and missing packets in MINA Data Transmission (source code available)

“Solving the problem of multiple Instances of Filter of the same type in Mina”

Practice Summary: The Pits encountered when upgrading Netty4.x (Thread)

Practice Summary: Netty3.x vs. Netty4.x Threading Model

Netty Security: An Introduction to the principles, code Demo (Part 1)

Netty Security: An Introduction to the principles, code Demo (Part 2)

How Netty exits gracefully

NIO Framework: Netty’s Approach to High Performance

Twitter: How to Use Netty 4 to Reduce JVM GC Overhead

Absolute Dry Goods: Technical Essentials of Netty-based Mass Access Push Service

“Netty Dry Goods Sharing: Jingdong Jingmai production grade TCP Gateway Technology Practice Summary”

Getting Started: The most Thorough Analysis of Netty high-performance Principles and Frameworks so far

For Beginners: Learning Methods and Advanced Strategies for Netty, the Java High-performance NIO Framework

The Difference between Java NIO and Classic IO in one Minute

The greatest Introduction to Java NIO ever: For those worried about getting started and giving up, read this!

“Hand to hand teach you to use Netty to realize the heartbeat mechanism of network communication program, disconnection reconnection mechanism”

Java BIO and NIO are difficult to understand?

>> More similar articles……

[2] Network programming basics:

TCP/IP detail – Chapter 11 UDP: User Datagram Protocol

Chapter 17: TCP: Transmission Control Protocol

Chapter 18: Establishment and Termination of TCP Connections

TCP/IP Detail – Chapter 21: TCP Timeouts and Retransmission

Once upon a Time: The TCP/IP Protocol that Changed the World

“Understanding TCP in Depth (PART 1) : Fundamentals”

In – Depth understanding of TCP protocol (part 2) : RTT, sliding Windows, congestion Handling

Classic Theory: TCP three-way handshake and Four-way Wave

Wireshark Packet Capture Analyzing TCP Three-Way Handshake and Four-Way Wave

Computer Network Communication Protocol Diagram (Chinese Collector edition)

What is the maximum size of a packet in UDP?

P2P technical details (A) : NAT details – detailed principle, P2P introduction

Peer-to-peer NAT traversal (hole drilling)

P2P technology details (3) : P2P technology STUN, TURN, ICE details

Easy to Understand: A Quick understanding of NAT Penetration in P2P Technology

“High Performance Network Programming (1) : How many Concurrent TCP connections can a Single server Have?”

“High Performance Network Programming part II: The Famous C10K Concurrent Connection Problem in the Last Decade”

High Performance Network Programming (PART 3) : The Next 10 Years, It’s Time to Consider C10M Concurrency

“High Performance Network Programming (IV) : Theoretical Exploration of High Performance Network Applications from C10K to C10M”

High Performance Network Programming part 5: Understanding the I/O Model in High Performance Network Programming

High Performance Network Programming (6) : Understanding the Threading Model in High Performance Network Programming

Java BIO and NIO are hard to understand? Follow the code examples and re-understand them!

“Unknown Network Programming (1) : A Simple Analysis of the DIFFICULT Problems in TCP Protocol (part 1)”

“Unknown Network Programming (II) : A Brief Analysis of the DIFFICULT Problems in TCP Protocol (Part II)”

TIME_WAIT and CLOSE_WAIT when closing TCP connections

“Network Programming under the radar (Part 4) : A Deep Dive into TCP’s Abnormal Shutdown.”

Hidden Network Programming part 5: UDP Connectivity and Load Balancing

Unknown Network Programming (6) : Understanding UDP in Depth and Using it Well

Unknown Network Programming (7) : How to Make unreliable UDP Reliable?

Network Programming under the Radar (Part 8) : Deep Decryption of HTTP from the Data Transport Layer

Unknown Network Programming (9) : Connecting Theory with Practice, Understanding DNS in all Directions

Network Programming Lazy Introduction part 1: Quick Understanding of Network Communication Protocols part 1

“Network Programming Lazy Introduction ii: A Quick Understanding of Network Communication Protocols (Part II)”

“Network programming lazy Introduction (3) : A quick understanding of TCP protocol is enough.”

Network Programming Slacker’s Guide (part 4) : Quickly Understand the Difference between TCP and UDP

A Quick Look at why UDP Sometimes Has an Advantage over TCP

“Network programming lazy introduction (six) : The history of the most popular hub, switch, Router function principle introduction”

Web Programming Slacker’s Guide (7) : Understand HTTP in a Nutshell

“Network programming lazy introduction (8) : How to write tcp-based Socket long connection”

Web Programming for lazy People (9) : Why use MAC Addresses when you have IP Addresses?

Web Programming For lazy People (10) : Quick Reading of the QUIC Protocol in the Time it takes to pee

Technology Literacy: A New Generation of UdP-based Low Latency Network Transport Layer Protocol — QUIC In Detail

Making the Internet Faster: Sharing the technology practice of the new generation OF QUIC protocol in Tencent

Summary of Optimization Methods for Short Connections in Modern Mobile Networks: Request Speed, Weak Network Adaptation, and Security Guarantee

Talk about long connections in iOS

Mobile IM Developers must Read (1) : Easy to Understand the “weak” and “Slow” mobile Web

Mobile IM Developers must Read (ii) : Summary of the Most Complete Mobile Weak Network Optimization Methods ever

Detailed explanation of IPv6 Technology: Basic Concepts, Application Status, Technical Practice (Part I)

Detailed explanation of IPv6 Technology: Basic Concepts, Application Status, Technical Practice (Part 2)

From HTTP/0.9 to HTTP/2: Understanding the History and Design of the HTTP Protocol

Learn TCP three handshakes and four waves with Animation

Introduction to Brain-dead Network Programming (II) : What are We Reading and writing when We read and write sockets?

Introduction to Brain-dead Network Programming (3) : Some must-know HTTP protocols

Introduction to Web Programming (4) : A Quick Understanding of HTTP/2 Server Push

Introduction to Brain-dead Network Programming (5) : The Ping command you Use every Day, What is it?

Introduction to Network Programming (6) : What is public IP and internal IP? What is NAT?

Understanding the Technical Challenges of Real-time Communication by Taking The Network Access Layer Design of Online Game Server as an Example

Going High: The Basics of networking that Good Android Programmers Must Know and Know

Comprehensive Understanding of mobile DNS Domain name Hijacking and other miscellaneous diseases: Technical principles, Root causes, Solutions, etc.

Mobile DNS Optimization practice of Meitu App: HTTPS Request Time Reduced by Nearly half

UDP and TCP: An Android Programmer’s Must-know Network Communication Transport Layer Protocol

Introduction to Zero-Base Communication Technology for IM Developers (PART 1) : 100 Years of Communication Switching Technology (Part 1)

Introduction to Zero-Base Communication Technology for IM Developers (PART 2) : 100 Years of Communication Switching Technology (Part 2)

Introduction to Zero-Base Communication Technology for IM Developers (III) : The Century-old Change of Chinese Communication Mode

Introduction to Zero-Base Communication technology for IM Developers (iv) : Evolution of The Mobile Phone, The Most Comprehensive History of Mobile Terminals in History

Introduction to Zero-Base Communication Technology for IM Developers (5) : 1G to 5G, 30 Years of Mobile Communication Technology Evolution

Introduction to Zero-Base Communication Technology for IM Developers (6) : Contact Person of Mobile Terminal — “Base Station” Technology

Introduction to Zero-Base Communication Technology for IM Developers (7) : The Swift Horse of Mobile Terminal — “Electromagnetic Wave”

Introduction to Zero-Based Communication Technology for IM Developers (8) : Zero-Based, The Most Powerful “Antenna” Theory Literacy

Introduction to Zero-Base Communication Technology for IM Developers (9) : The Core Network — the Backbone of Wireless Communication Networks

Introduction to Zero-Base Communication Technology for IM Developers (10) : Zero-Base, Greatest 5G Literacy

Introduction to Zero-Base Communication technology for IM Developers (11) : Why WiFi Signal is Bad?

Introduction to Basic Communication technology for IM Developers (12) : Access to network traffic? Network Down? Get it!

Introduction to Zero-Base Communication technology for IM Developers (13) : Why Cell phone Signal is Bad?

How hard is wireless Internet access on high-speed Trains?

Introduction to Zero-Base Communication technology for IM Developers (Part 15) : Understanding Location Technology in one article

Baidu APP Mobile Terminal Network In-depth Optimization Practice Sharing (I) : DNS Optimization

Baidu APP Mobile Terminal Network In-depth Optimization Practice Sharing (II) : Network Connection Optimization chapter

Baidu APP Mobile Terminal Network In-depth Optimization Practice Sharing (III) : Mobile Terminal Weak Network Optimization Chapter

Sharing by Chen Shuo: From simple to deep, A Summary of Learning Experience of Network Programming

How Many HTTP Requests can You Make over a TCP connection?

Zhihu Technology Sharing: The Practice of High Performance Long Connection Gateway technology for Ten Million Level Concurrent In Zhihu

>> More similar articles……
