The importance of computer networks in the IT industry

IT stands for Internet technology, and the work is closely tied to the network: the front end is responsible for interacting with the back end (server), and that interaction must go through the network, so understanding some networking knowledge is very helpful.

The front end must know the computer network knowledge series article:

  • DNS servers and cross-domain problems
  • Layered models of computer networks
  • IP addresses and MAC addresses
  • Computer network knowledge the front end must know (cross-domain requests, proxies, local storage)
  • Computer network knowledge the front end must know (TCP)
  • Computer network knowledge the front end must know (HTTP)
  • Computer network knowledge the front end must know (XSS, CSRF, and HTTPS)

Network model data processing process

The role of transport layer protocols

  • Provides an end-to-end connection, typically between the front end and the back-end server
  • The network layer only transfers data and does not care whether delivery succeeds, so TCP ensures data reliability when data is lost or damaged

Classification of transport layer protocols

  • TCP (Transmission Control Protocol):
  1. A reliable, connection-oriented protocol
  2. Low transmission efficiency
  • UDP (User Datagram Protocol):
  1. An unreliable, connectionless service
  2. High transmission efficiency

TCP

The function of TCP

TCP is a reliable, connection-oriented protocol. To ensure this, it provides the following functions:

  1. Segments and packages data for transmission. Sending a large block of data in one go over fixed bandwidth easily causes congestion; imagine a whole train trying to run on a highway.
  2. Numbers each packet to control ordering. Because data is segmented for transmission and there is more than one route through the network, some segments are delayed and arrive out of order; without numbering, the data could not be reassembled in its original form. Imagine shipping a large jigsaw puzzle from one place to another, piece by piece over multiple routes, without numbering the pieces.
  3. Handles loss, retransmission, and discarding in transit. Because routes in the network introduce delay and packet loss, a retransmission mechanism ensures data integrity.
  4. Performs flow control, which prevents congestion and prevents packet loss when the sending rate is too fast for the receiver to keep up.

The TCP header

  • URG => the urgent pointer field is valid.
  • ACK => the acknowledgement number is valid.
  • PSH => the receiver should deliver this segment to the application layer as soon as possible instead of buffering it.
  • RST => reset (abort) the connection.
  • SYN => synchronize sequence numbers to initiate a new connection.
  • FIN => the sender has finished sending data.
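As a rough sketch, the six flags above occupy individual bits of one byte in the TCP header, so decoding them is a matter of bit masks. The bit values below follow the standard layout; the helper name is ours:

```python
# Sketch: decode the six control flags from the flag byte of a TCP header.
# Bit layout (URG, ACK, PSH, RST, SYN, FIN) follows RFC 793.

FLAGS = {"URG": 0x20, "ACK": 0x10, "PSH": 0x08, "RST": 0x04, "SYN": 0x02, "FIN": 0x01}

def decode_flags(flag_byte: int) -> list:
    """Return the names of the flags set in a TCP flag byte."""
    return [name for name, mask in FLAGS.items() if flag_byte & mask]

# A pure SYN segment (first handshake) has only the SYN bit set:
print(decode_flags(0x02))   # ['SYN']
# The second handshake sets SYN and ACK together:
print(decode_flags(0x12))   # ['ACK', 'SYN']
```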

16-bit checksum: computed over the TCP header and the data using 16-bit one's-complement binary summation.

16-bit urgent pointer: valid only when URG is 1; it is a forward offset that, added to the sequence number field, gives the sequence number of the last byte of urgent data. Usually used to interrupt communication (e.g. Ctrl + C).
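The one's-complement checksum mentioned above can be sketched as follows (the RFC 1071 algorithm: sum 16-bit words, fold the carries back in, then complement). The function name and sample bytes are ours:

```python
# Sketch: the 16-bit one's-complement checksum TCP computes over its
# header and data (the RFC 1071 algorithm).

def internet_checksum(data: bytes) -> int:
    if len(data) % 2:                 # pad odd-length data with a zero byte
        data += b"\x00"
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]      # 16-bit big-endian words
        total = (total & 0xFFFF) + (total >> 16)   # fold the carry back in
    return ~total & 0xFFFF            # one's complement of the sum

payload = b"\x00\x01\xf2\x03"
csum = internet_checksum(payload)
# The receiver verifies by summing data plus checksum; the result is 0:
print(hex(csum), internet_checksum(payload + csum.to_bytes(2, "big")))
```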

Three handshakes and four waves

  1. First handshake: host A sends a segment with the SYN bit set to host B to request a connection. With this segment, host A tells host B that it wants to establish a connection, asks for a reply, and tells host B the initial sequence number for the transmission.
  2. Second handshake: host B responds to host A with a segment that has both the ACK and SYN flag bits set. The ACK tells host A that its segment was received; the SYN tells host A which sequence number host B will start from.
  3. Third handshake: host A acknowledges that it has received host B's segment, and the actual data transmission can begin.

The first handshake essentially lets the server confirm that the client can send; the second handshake lets the client confirm that the server can both receive and send; the third handshake lets the server confirm that the client can receive.
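In practice the handshake is performed by the operating system kernel, not application code: calling `connect()` on a TCP socket triggers the three-way handshake, and `close()` later triggers the FIN exchange. A minimal sketch with a throwaway localhost server:

```python
# Sketch: connect() performs the three-way handshake under the hood,
# and close() starts the FIN/ACK teardown. A localhost listener makes
# this runnable without any external server.
import socket
import threading

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))        # port 0: let the OS pick a free port
server.listen(1)
port = server.getsockname()[1]

def accept_once():
    conn, _ = server.accept()        # completes the server side of the handshake
    conn.close()                     # sends FIN in the other direction

t = threading.Thread(target=accept_once)
t.start()

client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(("127.0.0.1", port))  # SYN -> SYN+ACK -> ACK happens here
print("connected")
client.close()                       # starts the four-wave close sequence
t.join()
server.close()
```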

Four waves:

  1. Host A sends a disconnection request with the FIN control flag set
  2. Host B acknowledges receipt of the disconnection request
  3. Host B requests that the connection be closed in the opposite direction
  4. Host A acknowledges the connection-closure request received from host B

The first wave lets the server learn that the client wants to disconnect; the second wave lets the client confirm that the server received the disconnection request; the third wave lets the client learn that the server has finished sending data and wants to close its side; the fourth wave lets the server confirm that the client is disconnecting and close the connection. If the server has already sent all of its data when it receives the FIN, the second and third waves can be merged into one segment, going straight to the fourth wave.

TCP traffic control and TCP congestion control

Windows:

  1. Receiver window (RWND): the size of the receiver's buffer. The receiver puts this value in the window field of the TCP header and sends it to the sender.
  2. Congestion window (CWND): the sender's estimate of how much data the network can carry without congestion.
  3. Send window (SWND): its upper limit is min(RWND, CWND). When RWND < CWND, the receiver's capacity limits the send window; when CWND < RWND, network congestion limits it.

Difference between congestion control and flow control:

  • Congestion is a global problem that affects all hosts, all routers, and all factors that degrade network traffic performance. Flow control usually refers to the control of point-to-point traffic, which is an end-to-end problem.
  • Flow control is to control the rate at which the sender sends data so that the receiver can receive it in time. Congestion control controls the amount of data injected into the network.
  • The flow window is controlled by the receiver and the congestion window is controlled by the sender

TCP flow control

Flow control means the receiver keeps the sender from sending faster than the receiver can accept. The sliding window mechanism makes it easy to control the sender's flow on a TCP connection. The unit of the TCP window is the byte, not the segment. The sender's send window cannot exceed the receive window value provided by the receiver.

  1. The initial window value is 400 bytes, and each segment is 100 bytes. After two segments are sent, the segment with seq=201 (100 bytes) is sent but unacknowledged. Host B sends an acknowledgement to host A and adjusts the window size to 300 bytes.
  2. Host A sends bytes 301-500 to host B and retransmits 201-300. Host B sends an acknowledgement to host A and adjusts the window size to 100 bytes.
  3. Host A sends bytes 501-600 to host B, and host B acknowledges receipt, adjusts the window size to 0, and asks host A to pause sending.
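The rule the walkthrough above relies on can be sketched in a few lines: the sender may only transmit a segment if it fits inside the receiver's advertised window. The helper name and the byte values (mirroring the 400 / 300 / 0 windows above) are ours:

```python
# Sketch: the receiver-advertised window limits how many unacknowledged
# bytes the sender may have in flight at once.

def can_send(next_seq: int, last_acked: int, rwnd: int, segment: int) -> bool:
    """The sender may transmit only if the segment fits in the window."""
    in_flight = next_seq - last_acked
    return in_flight + segment <= rwnd

# Window 400, segments of 100 bytes: four fit in flight, a fifth does not.
print(can_send(next_seq=300, last_acked=0, rwnd=400, segment=100))  # True
print(can_send(next_seq=400, last_acked=0, rwnd=400, segment=100))  # False
# After the receiver shrinks the window to 0, sending pauses entirely.
print(can_send(next_seq=600, last_acked=600, rwnd=0, segment=100))  # False
```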

Suppose that shortly after B sends a segment with RWND=0 to A, some space frees up in B's receive cache. B then sends a segment with RWND=400 to A, but this segment is lost in transit. A waits for B's non-zero window notification, while B waits for data from A: a deadlock. To break this deadlock, TCP keeps a persistence timer for each connection. Whenever one side receives a zero-window notification from the other, it starts the persistence timer. If the timer expires, that side sends a zero-window probe segment (carrying only 1 byte of data), and the other side replies with its current window value.

TCP congestion control

Congestion control principle

The principle for the sender to control the congestion window is: as long as there is no congestion on the network, the congestion window is enlarged to send more packets. However, whenever there is congestion on the network, the congestion window is reduced to reduce the number of packets injected into the network.

Congestion control design

From the perspective of control theory, congestion control can be divided into open loop control and closed loop control:

  • Open-loop control is the design of a network in which all factors related to congestion are considered in advance and cannot be corrected halfway through once the system is up and running.
  • Closed-loop control is based on the concept of feedback loop and includes the following measures:
  1. Monitor network systems to detect when and where congestion occurs
  2. Transmit information about congestion to a place where action can be taken
  3. Adjust network system actions to resolve problems as they arise.

Congestion control methods

The Internet recommended standard RFC 2581 defines four congestion control algorithms: slow start, congestion avoidance, fast retransmit, and fast recovery. We assume:

  1. Data is sent in one direction only; the other direction carries only acknowledgements.
  2. The receiver always has plenty of cache space, so the size of the send window is determined only by the level of congestion on the network.

Slow start algorithm

When a TCP connection is first established, sending a large number of packets into the network at once can exhaust router buffer space and cause congestion. A newly established connection therefore cannot start by sending a large volume of data; instead, the amount sent each time is increased gradually according to network conditions. Specifically, when a new connection is created, CWND is initialized to one maximum segment size (MSS), the sender starts sending at the congestion window size, and each acknowledged segment grows CWND by up to one MSS. In this way the congestion window CWND increases gradually. For simplicity, the congestion window below is measured in number of segments; in reality it is measured in bytes. The diagram below:
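Since every acknowledged segment adds one MSS, a full window of acknowledgements roughly doubles CWND each round trip. A minimal sketch (in MSS units; the function name is ours):

```python
# Sketch of slow start: cwnd starts at 1 MSS and roughly doubles each
# RTT, because each acknowledged segment grows cwnd by one MSS.

def slow_start(rounds: int) -> list:
    cwnd = 1
    history = []
    for _ in range(rounds):
        history.append(cwnd)
        cwnd *= 2   # a full window of ACKs doubles the window per RTT
    return history

print(slow_start(5))   # [1, 2, 4, 8, 16]
```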

Congestion avoidance algorithm

Let the congestion window grow slowly: every round-trip time (RTT), the sender's congestion window CWND increases by 1 instead of doubling, so the congestion window grows slowly and linearly.

How slow start and congestion avoidance alternate

To prevent network congestion caused by excessive CWND growth, a slow start threshold state variable, ssthresh, is also maintained. It is used as follows:

  • When CWND < ssthresh, the slow start algorithm is used.
  • When CWND > ssthresh, the congestion avoidance algorithm is used instead.
  • When CWND = ssthresh, either slow start or congestion avoidance may be used.
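The alternation between the two phases can be sketched as follows (in MSS units, with a hypothetical ssthresh of 8; the function name is ours): exponential growth below the threshold, then additive growth at or above it.

```python
# Sketch: slow start (doubling) below ssthresh, congestion avoidance
# (+1 per RTT) at or above it. Values are in MSS units.

def cwnd_growth(rounds: int, ssthresh: int) -> list:
    cwnd, history = 1, []
    for _ in range(rounds):
        history.append(cwnd)
        if cwnd < ssthresh:
            cwnd *= 2   # slow start: double per RTT
        else:
            cwnd += 1   # congestion avoidance: additive increase
    return history

print(cwnd_growth(8, ssthresh=8))   # [1, 2, 4, 8, 9, 10, 11, 12]
```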

Multiplicative decrease and additive increase

  • Multiplicative decrease: in either the slow start or the congestion avoidance phase, whenever a timeout occurs, the slow start threshold ssthresh is halved, i.e. set to half of the current congestion window (and the slow start algorithm runs again). When congestion occurs frequently, ssthresh drops rapidly, greatly reducing the number of packets injected into the network.
  • Additive increase: after the congestion avoidance algorithm takes over, the congestion window increases slowly, preventing the network from becoming congested too early.

Fast retransmission and fast recovery

A TCP connection sometimes sits idle for a long time waiting for the retransmission timer to expire; slow start and congestion avoidance alone do not handle this well, so the fast retransmit and fast recovery congestion control methods were proposed.

  • The fast retransmit algorithm does not eliminate the retransmission timer; it retransmits a lost segment earlier in some cases: if the sender receives three duplicate ACKs, it concludes the segment is lost and retransmits it immediately without waiting for the retransmission timer to expire. Fast retransmit requires the receiver to send a duplicate acknowledgement as soon as it receives an out-of-order segment (so the sender learns early that a segment did not arrive), rather than waiting to piggyback the acknowledgement on outgoing data. Under this algorithm, the sender retransmits the missing segment as soon as it receives three consecutive duplicate acknowledgements, rather than waiting for the timer to expire. The diagram below:
  • Fast recovery algorithm:
  1. When the sender receives three consecutive duplicate acknowledgements, it performs multiplicative decrease, halving the ssthresh threshold, but it does not then perform slow start.
  2. The sender now assumes the network is probably not congested, reasoning that it would not be receiving multiple duplicate acknowledgements if it were. So instead of running slow start, it sets CWND to the halved ssthresh value and then runs congestion avoidance. The diagram below:
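The fast-recovery reaction can be sketched in a few lines (in MSS units; the function name and example values are ours): on three duplicate ACKs, ssthresh is halved and CWND restarts from the new ssthresh rather than from 1.

```python
# Sketch: fast retransmit / fast recovery reaction to duplicate ACKs.
# ssthresh is halved (multiplicative decrease) and cwnd restarts from
# ssthresh, skipping slow start. Values are in MSS units.

def on_dup_acks(cwnd: int, dup_acks: int, ssthresh: int):
    """Return (cwnd, ssthresh) after a possible congestion signal."""
    if dup_acks >= 3:                  # fast retransmit triggers here
        ssthresh = max(cwnd // 2, 2)   # multiplicative decrease
        cwnd = ssthresh                # fast recovery: no slow start
    return cwnd, ssthresh

print(on_dup_acks(cwnd=16, dup_acks=3, ssthresh=64))   # (8, 8)
```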

UDP

UDP applications

Because UDP is an unreliable, connectionless service with high transmission efficiency, it suits applications that need real-time data and can tolerate packet loss. Instant messaging (e.g. QQ), video software, TFTP (Trivial File Transfer Protocol), and so on are therefore built on UDP.
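UDP's connectionless nature shows up directly in the socket API: there is no handshake, just `sendto` and `recvfrom`. A minimal runnable sketch over localhost:

```python
# Sketch: UDP send/receive with no connection setup and no delivery
# guarantee, using two localhost datagram sockets.
import socket

receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 0))   # port 0: let the OS pick a free port
addr = receiver.getsockname()

sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"hello", addr)     # fire-and-forget: no handshake first

data, _ = receiver.recvfrom(1024)
print(data)                       # b'hello'
sender.close()
receiver.close()
```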

How UDP is used

Because the IP address space includes broadcast addresses, UDP relies mainly on them to implement broadcasting.

Conclusion: IT work is closely tied to the network; the front end is responsible for interacting with the back end (server), and that interaction must go through the network, so understanding some networking knowledge is very helpful. Next up: HTTP and HTTPS.

References for this article:

  1. Computer network
  2. TCP traffic control and congestion control