TCP/IP model

Transmission Control Protocol (TCP)/Internet Protocol (IP) is the core protocol suite of the Internet: a family of network protocols that forms the basis of Internet communication.

The TCP/IP reference model divides protocols into four layers: the link layer, the network layer, the transport layer, and the application layer. The following figure compares the TCP/IP model with the OSI model.

The TCP/IP protocol family is layered from top to bottom. At the top is the application layer, home of familiar protocols such as HTTP and FTP. Below it is the transport layer, where the famous TCP and UDP protocols reside. The third layer is the network layer, where the IP protocol attaches IP addresses and other header information that determine the packet's destination. The bottom layer is the data link layer, which prepends an Ethernet header to the data to be transmitted and appends a CRC checksum, readying the frame for physical transmission.

The figure above clearly shows the role of each layer in TCP/IP. The process of TCP/IP communication corresponds to pushing data down the stack and popping it back up: on the sending side, each layer encapsulates the data with its own header (and sometimes trailer), adding the control information needed to deliver it to the destination; on the receiving side, each layer strips its header and trailer until the original data is finally recovered.

The preceding figure uses HTTP as an example.

Data link layer

The physical layer is responsible for converting between streams of 0 and 1 bits and physical signals: voltage levels on a wire, or light pulses switching on and off in a fiber. The data link layer divides the bit sequence into data frames transmitted from one node to an adjacent node. These nodes are uniquely identified by MAC addresses (a MAC address is a physical address; each network interface has its own).

  • Frame encapsulation: encapsulates a network-layer datagram into a frame by adding a header and a trailer. The frame header contains the source and destination MAC addresses.
  • Transparent transmission: zero-bit stuffing and escape characters ensure that any bit pattern in the payload can be carried.
  • Reliable transmission: rarely provided on links with low error rates, but wireless links (WLAN) do ensure reliable transmission at this layer.
  • Error detection (CRC): the receiver checks each frame and, on finding an error, discards the frame.
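As a rough illustration of the error-detection step, here is a minimal sketch in Python using CRC-32 (the checksum family Ethernet uses for its frame check sequence); the framing itself is simplified to just appending the checksum:

```python
import zlib

def frame_with_crc(payload: bytes) -> bytes:
    """Append a CRC-32 checksum, as the link layer appends a frame check sequence."""
    crc = zlib.crc32(payload)
    return payload + crc.to_bytes(4, "big")

def check_frame(frame: bytes) -> bool:
    """Receiver side: recompute the CRC; a mismatch means the frame is discarded."""
    payload, received_crc = frame[:-4], int.from_bytes(frame[-4:], "big")
    return zlib.crc32(payload) == received_crc

frame = frame_with_crc(b"hello")
print(check_frame(frame))                       # intact frame passes
corrupted = bytes([frame[0] ^ 0x01]) + frame[1:]
print(check_frame(corrupted))                   # a single flipped bit is detected
```

A real NIC computes and checks the CRC in hardware; this only shows the detect-and-discard logic.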

Network layer

1. IP protocol

IP is the core of TCP/IP: all TCP, UDP, ICMP, and IGMP data is transmitted in IP datagrams. It is important to note that IP is not a reliable protocol; it provides no mechanism for recovering data that fails to arrive. That responsibility falls to the upper-layer protocols: TCP, or the application itself when using UDP.

1.1 IP addresses

At the data link layer we usually identify nodes by MAC address; at the IP layer there is a similar identifier, the IP address.

A 32-bit IP address is divided into a network part and a host part. This reduces the number of entries a router must keep in its routing table: all terminals sharing the same network address are known to lie in the same network, so the router needs only one routing-table entry pointing toward that network address to reach all of them.

Class A IP addresses: 0.0.0.0 to 127.255.255.255
Class B IP addresses: 128.0.0.0 to 191.255.255.255
Class C IP addresses: 192.0.0.0 to 223.255.255.255
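The split between network part and host part can be explored with Python's standard `ipaddress` module; the /24 prefix below is an arbitrary example:

```python
import ipaddress

# A class-C-style network: 24-bit network part, 8-bit host part.
net = ipaddress.ip_network("192.168.1.0/24")
addr = ipaddress.ip_address("192.168.1.42")

print(addr in net)          # True: same network part, reachable via one routing entry
print(net.network_address)  # 192.168.1.0
print(net.num_addresses)    # 256 addresses share this network prefix
```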

1.2 IP Protocol Header

This section describes only the 8-bit TTL field, which specifies how many routers the packet may pass through before being discarded. Each time an IP packet passes through a router, its TTL decreases by 1; when the TTL reaches zero, the packet is automatically discarded.

The maximum value of this field is 255, meaning a packet can traverse at most 255 routers before being discarded. The initial TTL a host actually sets depends on the operating system; common values are 32 or 64.

2. ARP and RARP

ARP (Address Resolution Protocol) is a protocol for obtaining a MAC address from an IP address.

A host does not initially know which MAC address corresponds to a given host's IP address. Before sending an IP packet, the host first checks its ARP cache (a table of IP-to-MAC address mappings).

If the queried IP-MAC pair is not in the cache, the host sends an ARP broadcast packet to the network containing the IP address to be resolved. All hosts that receive the broadcast compare the queried IP address with their own; the host that owns the address prepares an ARP reply containing its MAC address and sends it to the host that issued the ARP broadcast.

After receiving the ARP reply, the broadcasting host updates its ARP cache (the table where IP-MAC mappings are stored) and uses the new cache entry to build the data link layer headers for the packets it is about to send.
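The cache-then-broadcast logic described above can be sketched as follows; `broadcast_arp_request` and the addresses are hypothetical stand-ins for the real network broadcast:

```python
# Hypothetical ARP cache: IP -> MAC mapping, as described above.
arp_cache: dict = {}

def broadcast_arp_request(ip: str) -> str:
    # Stand-in for the real broadcast; a fixed table plays the role of the network,
    # where every host would check the queried IP against its own.
    known_hosts = {"192.168.1.1": "aa:bb:cc:dd:ee:01"}
    return known_hosts[ip]

def resolve(ip: str) -> str:
    """Return the MAC for ip, broadcasting an ARP request on a cache miss."""
    if ip in arp_cache:
        return arp_cache[ip]          # cache hit: no broadcast needed
    mac = broadcast_arp_request(ip)   # cache miss: ask the network
    arp_cache[ip] = mac               # remember the reply for future packets
    return mac

print(resolve("192.168.1.1"))  # miss: broadcast, then cache
print(arp_cache)               # subsequent sends hit the cache
```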

The RARP (Reverse ARP) protocol works in the opposite direction, obtaining an IP address from a MAC address.

3. ICMP

The IP protocol is not a reliable protocol: it does not guarantee that data will be delivered, so naturally the job of reporting delivery failures falls to other modules. One important module is ICMP (Internet Control Message Protocol). ICMP is not a high-level protocol but an IP-layer protocol.

If an error occurs while an IP packet is being delivered, for example the destination host is unreachable or the route is unreachable, ICMP packages the error information and sends it back to the source host, giving the host a chance to handle the error. This is why protocols built on top of the IP layer can be made reliable.

Ping

Ping is arguably the best-known use of ICMP and is part of the TCP/IP protocol suite. Running the ping command checks whether the network is working and helps analyze and locate network faults.

For example, when one of our websites becomes unavailable, I usually ping the site. Ping returns some useful information, typically like the following:

The word ping comes from sonar, and that is exactly what this program does: it uses ICMP packets to detect whether another host is reachable. The principle is that the host sends an ICMP echo request with type code 8, and the destination host responds with an ICMP echo reply with type code 0.
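The echo request format can be illustrated by building one by hand. This sketch constructs the ICMP header (type 8, code 0) and its Internet checksum without actually sending it (sending raw ICMP normally requires elevated privileges); the identifier and payload are arbitrary:

```python
import struct

def icmp_checksum(data: bytes) -> int:
    """Internet checksum: one's-complement sum of 16-bit words."""
    if len(data) % 2:
        data += b"\x00"
    total = sum(struct.unpack(f"!{len(data) // 2}H", data))
    total = (total >> 16) + (total & 0xFFFF)
    total += total >> 16
    return ~total & 0xFFFF

def echo_request(ident: int, seq: int, payload: bytes = b"ping") -> bytes:
    """Build an ICMP echo request: type 8, code 0, checksum, identifier, sequence."""
    header = struct.pack("!BBHHH", 8, 0, 0, ident, seq)  # checksum field zeroed first
    csum = icmp_checksum(header + payload)
    return struct.pack("!BBHHH", 8, 0, csum, ident, seq) + payload

pkt = echo_request(0x1234, 1)
print(pkt[0])                   # 8 = echo request
print(icmp_checksum(pkt) == 0)  # a valid packet checksums to zero
```

Receivers verify a packet by checksumming the whole thing, including the checksum field: the result is 0 for an intact packet.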

Traceroute

Traceroute is an important and convenient tool for detecting routes between hosts and destination hosts.

The principle of Traceroute is quite interesting. Given the destination host's IP address, it first sends a UDP packet with TTL=1. The first router decrements the TTL to 0, discards the packet, and sends an ICMP "time exceeded" message back to the source host. After receiving this message, the host sends a UDP packet with TTL=2, prompting the second router to return an ICMP message, and so on until a packet reaches the destination host. Because the UDP packet is addressed to an unlikely port, the destination replies with an ICMP "port unreachable" message, signalling the end of the trace. In this way Traceroute collects the IP addresses of all the routers along the path.
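The TTL-probing idea can be captured in a toy simulation; the hop addresses here are made up, and a real traceroute would send actual UDP probes and parse the ICMP replies:

```python
# Simulated path of routers; in a real traceroute these would be discovered
# via ICMP "time exceeded" replies (and "port unreachable" from the destination).
path = ["10.0.0.1", "172.16.0.1", "203.0.113.7"]  # hypothetical hop addresses

def probe(ttl: int) -> tuple:
    """Send a probe with the given TTL; return (replying address, reached_destination)."""
    hop = path[ttl - 1]         # the router that decrements TTL to 0 and replies
    reached = ttl == len(path)  # the last hop is the destination itself
    return hop, reached

def traceroute() -> list:
    hops, ttl = [], 1
    while True:
        hop, reached = probe(ttl)
        hops.append(hop)
        if reached:
            return hops         # destination answered: trace complete
        ttl += 1                # probe again with a larger TTL

print(traceroute())
```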

TCP/UDP

TCP and UDP are transport layer protocols, but they have different features and application scenarios. The following table compares and analyzes TCP and UDP.

Message-oriented

In message-oriented transmission, UDP sends whatever the application layer hands down as-is: one application message per UDP datagram. The application must therefore choose a message of an appropriate size. If the message is too long, the IP layer must fragment it, which reduces efficiency; if it is too short, the IP header overhead becomes disproportionately large.

Byte-stream oriented

TCP is byte-stream oriented: it treats the application's data as an unstructured stream of bytes, even though the application hands data to TCP one block at a time (in varying sizes). TCP maintains a buffer, and when an application sends a block that is too long, TCP can split it into smaller pieces before sending.
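The contrast is easy to observe with UDP, which preserves message boundaries (one sendto call, one datagram); this sketch uses two sockets over the loopback interface:

```python
import socket

# UDP preserves message boundaries: two sendto calls arrive as two datagrams.
rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
rx.bind(("127.0.0.1", 0))      # port 0: let the OS pick a free port
rx.settimeout(2)               # avoid hanging if a datagram is somehow lost
addr = rx.getsockname()

tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
tx.sendto(b"first", addr)
tx.sendto(b"second", addr)

first = rx.recvfrom(1024)[0]   # one recvfrom per datagram
second = rx.recvfrom(1024)[0]  # boundaries kept intact
print(first, second)
tx.close(); rx.close()
```

A TCP receiver, by contrast, may read those ten bytes in any grouping, because the stream carries no message boundaries.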

Congestion control and flow control are a key focus of TCP and are explained later.

Some applications of TCP and UDP

When should you use TCP?

When communication quality matters, that is, when all the data must be transferred to the peer accurately. This applies to reliable applications such as HTTP, HTTPS, and FTP for file transfer, and to mail transfer protocols such as POP and SMTP.

When should you use UDP?

When communication quality is less critical but transmission speed is required, UDP can be used.

DNS

The Domain Name System (DNS) is a distributed database that maps domain names to IP addresses on the Internet. It lets users access services by memorable names instead of the numeric IP addresses machines read directly. The process of obtaining an IP address from a host name is called domain name resolution (or hostname resolution). DNS runs on top of UDP and uses port 53.
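The wire format of a DNS question is simple enough to build by hand. This sketch encodes a hostname into DNS label format and assembles a minimal query header; the transaction ID is arbitrary:

```python
import struct

def encode_qname(hostname: str) -> bytes:
    """DNS name encoding: each label prefixed by its length, terminated by a zero byte."""
    parts = [bytes([len(label)]) + label.encode() for label in hostname.split(".")]
    return b"".join(parts) + b"\x00"

def build_query(hostname: str, txid: int = 0x1234) -> bytes:
    """Minimal DNS query: 12-byte header + one question for an A record, class IN."""
    header = struct.pack("!HHHHHH", txid, 0x0100, 1, 0, 0, 0)     # RD=1, QDCOUNT=1
    question = encode_qname(hostname) + struct.pack("!HH", 1, 1)  # QTYPE=A, QCLASS=IN
    return header + question

print(encode_qname("example.com"))  # b'\x07example\x03com\x00'
```

Sending `build_query(...)` over UDP to a resolver's port 53 would produce a real lookup; parsing the reply is omitted here.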

Establishment and termination of TCP connections

1. Three-way handshake

TCP is connection-oriented: before either party sends data to the other, a connection must be established between the two sides. In TCP/IP, TCP provides a reliable connection service, initialized with a three-way handshake. The purpose of the three-way handshake is to synchronize the sequence numbers and acknowledgment numbers of both parties and to exchange TCP window size information.

First handshake: establishing the connection. The client sends a connection request segment with the SYN flag set to 1 and Sequence Number x, then enters the SYN_SENT state and waits for the server's confirmation.

Second handshake: the server receives the SYN segment. It must acknowledge the SYN segment from the client, so it sets the Acknowledgment Number to x+1 (the client's Sequence Number plus 1). It also sets its own SYN flag to 1 and its Sequence Number to y. The server puts all of this into a single segment (the SYN+ACK segment) and sends it to the client, entering the SYN_RECV state.

Third handshake: the client receives the server's SYN+ACK segment and sets the Acknowledgment Number to y+1, sending an ACK segment to the server. Once this segment is sent, both the client and the server enter the ESTABLISHED state, completing the TCP three-way handshake.
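The three handshakes can be walked through as a toy simulation; the sequence numbers x = 100 and y = 300 are arbitrary:

```python
# A toy walk-through of the three-way handshake described above.
x, y = 100, 300
client_state = server_state = "CLOSED"

# First handshake: client sends SYN, seq = x.
syn = {"SYN": 1, "seq": x}
client_state = "SYN_SENT"

# Second handshake: server replies SYN+ACK, ack = x+1, seq = y.
syn_ack = {"SYN": 1, "ACK": 1, "seq": y, "ack": syn["seq"] + 1}
server_state = "SYN_RECV"

# Third handshake: client acknowledges with ack = y+1.
ack = {"ACK": 1, "seq": x + 1, "ack": syn_ack["seq"] + 1}
client_state = server_state = "ESTABLISHED"

print(syn_ack["ack"], ack["ack"], client_state)  # 101 301 ESTABLISHED
```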

Why three handshakes?

To prevent an invalid (stale) connection request segment from suddenly arriving at the server and causing an error.

Here is how an invalid connection request segment can arise: the first connection request segment sent by the client is not lost but is delayed at some network node for a long time, so that it reaches the server only after the connection has been released. By then it is an invalid segment, but the server, on receiving it, mistakes it for a new connection request from the client.

The server then sends a confirmation to the client agreeing to establish a connection. If the "three-way handshake" were not used, a new connection would be established as soon as the server sent its acknowledgement. But since the client never actually issued a connection request, it ignores the server's confirmation and sends no data. The server, however, assumes the new transport connection has been established and waits for data from the client, so many of the server's resources are wasted. The three-way handshake prevents this: since the client sends no acknowledgement of the server's acknowledgement, the server, receiving no ACK, knows the client has not requested a connection.

2. Four waves

After the client establishes a TCP connection with the server through the three-way handshake, the connection must be torn down when data transfer is complete. Disconnecting a TCP connection involves the famous "four waves".

First wave: host 1 (either the client or the server) sends a FIN segment to host 2 with its Sequence Number set, and then enters the FIN_WAIT_1 state. This means host 1 has no more data to send to host 2.

Second wave: host 2 receives the FIN segment from host 1 and sends an ACK segment back to host 1, with the Acknowledgment Number set to the received Sequence Number plus 1. Host 1 then enters the FIN_WAIT_2 state. In effect, host 2 tells host 1: "I agree to your close request."

Third wave: host 2 sends a FIN segment to host 1 to close its side of the connection, and host 2 enters the LAST_ACK state.

Fourth wave: host 1 receives the FIN segment from host 2 and sends an ACK segment back to host 2, then enters the TIME_WAIT state. Host 2 closes the connection after receiving the ACK segment from host 1. If host 1 receives no further reply after waiting 2MSL, it concludes that the ACK arrived and that host 2 has closed normally, so it closes the connection as well.

Why four waves?

TCP is a connection-oriented, reliable, byte-stream-based transport-layer protocol operating in full-duplex mode. When host 1 sends a FIN segment, it only means that host 1 has no more data to send: host 1 tells host 2 that all of its data has been sent, but it can still receive data from host 2. When host 2 returns an ACK segment, it means host 2 knows host 1 has no data left to send, but host 2 may still have data to send to host 1. Only when host 2 also sends a FIN segment, indicating that it too has no data left, can both parties happily break off the TCP connection.

Why wait for 2MSL?

MSL (Maximum Segment Lifetime) is the longest time any segment can remain in the network before being discarded. There are two reasons for the wait:

  • Ensure that TCP full-duplex connections can be closed reliably
  • Ensure that duplicate data segments are removed from the network

First point: suppose host 1 went directly to CLOSED. If host 2 never received host 1's final ACK (because of the unreliability of IP or other network problems), host 2 would retransmit its FIN after a timeout, but since host 1 is already CLOSED, no connection matching the retransmitted FIN could be found. By holding TIME_WAIT instead of closing immediately, host 1 can reply with an ACK if it receives a FIN again, ensuring the connection closes correctly.

Second point: if host 1 closed directly and then opened a new connection to host 2, there is no guarantee that the new connection would use a different port number from the one just closed; the new and old connections might share the same port. Usually this causes no problem, but there is a special case: suppose the new connection uses the same port number as the old, closed one, and some delayed data from the old connection is still in the network. If that delayed data reaches host 2 after the new connection is established, TCP, seeing the same port numbers, will treat the delayed data as belonging to the new connection, confusing it with the new connection's real packets. Therefore a TCP connection must wait in the TIME_WAIT state for 2×MSL, ensuring that all data from the old connection has disappeared from the network.

TCP flow control

If the sender transmits too fast, the receiver may not keep up and data may be lost. Flow control means pacing the sender so that the receiver always has time to receive.

The sliding window mechanism makes it convenient to implement flow control for the sender on a TCP connection.

Let A send data to B. When the connection is established, B tells A: “My receive window is RWND = 400” (where RWND stands for Receiver window). Therefore, the sender’s send window cannot exceed the value of the receive window given by the receiver. Note that TCP’s window units are bytes, not message segments. Assume that each packet segment is 100 bytes long and the initial sequence number of the data packet segment is set to 1. Uppercase ACK indicates the ACK bit in the header, and lowercase ACK indicates the ACK value of the acknowledgement field.

As can be seen from the figure, B performs flow control three times: the window is first reduced to RWND = 300, then to RWND = 100, and finally to RWND = 0, at which point the sender may not send any more data. The sender pauses until host B issues a new window value. Each of the three segments B sends to A has ACK = 1; the acknowledgment number field is meaningful only when ACK = 1.

TCP keeps a persistence timer for each connection. As soon as one side of the connection receives a zero-window notification from the other, it starts the persistence timer. When the timer expires, that side sends a zero-window probe segment (carrying 1 byte of data); the side receiving the probe replies with its current window size, and if the window is still zero, the probing side resets the persistence timer.
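The window arithmetic from the example above (100-byte segments, a shrinking RWND) can be sketched as:

```python
# Toy model of the A-to-B exchange above: B shrinks its receive window,
# and A may never have more unacknowledged bytes in flight than RWND allows.
SEGMENT = 100  # bytes per segment, as in the example

def sendable(unacked_bytes: int, rwnd: int) -> int:
    """How many whole segments A may still send, given B's advertised window."""
    return max(0, (rwnd - unacked_bytes) // SEGMENT)

print(sendable(0, 400))    # 4 segments fit in the initial window
print(sendable(0, 300))    # window shrunk: only 3
print(sendable(0, 100))    # then 1
print(sendable(0, 0))      # RWND = 0: the sender must pause entirely
print(sendable(100, 300))  # 100 bytes already in flight: only 2 more segments
```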

TCP congestion control

The sender maintains a state variable called the congestion window, CWND (congestion window). Its size depends on the degree of congestion in the network and changes dynamically. The sender sets its send window equal to the congestion window.

The principle for the sender to control the congestion window is: as long as there is no congestion on the network, the congestion window will be larger to send more packets. But whenever there is congestion on the network, the congestion window is reduced to reduce the number of packets injected into the network.

Slow start algorithm:

When a host starts sending data, injecting a large number of bytes into the network at once may cause congestion, because the network's current load is unknown. A better approach is to probe first: increase the send window gradually from small to large, that is, gradually increase the value of the congestion window.

Usually, the congestion window CWND is initialized to at most one MSS when segment transmission begins. Each time an acknowledgement of a new segment is received, the congestion window is increased by at most one MSS. Growing the sender's congestion window this way keeps the rate of packet injection reasonable.

The congestion window CWND doubles with each transmission round. A transmission round lasts roughly one round-trip time (RTT), but "transmission round" emphasizes something more specific: all the segments permitted by the congestion window CWND are sent back to back, and then the acknowledgement of the last byte sent is received.

Moreover, the "slow" in slow start does not mean CWND grows slowly. It means that CWND starts at 1 when TCP begins sending segments, so the sender transmits only one segment at first (to probe the network's congestion level) and then increases CWND step by step.

To prevent the congestion window CWND from growing so large that it causes network congestion, a slow start threshold state variable, ssthresh, is needed. The slow start threshold ssthresh is used as follows:

  • When CWND < ssthresh, use the slow start algorithm described above.
  • When CWND > ssthresh, stop using slow start and use the congestion avoidance algorithm instead.
  • When CWND = ssthresh, either slow start or congestion avoidance may be used.

Congestion avoidance

Congestion avoidance lets the congestion window CWND grow slowly: every round-trip time RTT, the sender's CWND increases by 1 rather than doubling. CWND thus grows linearly, much more slowly than under the slow start algorithm.

Whenever the sender determines that the network is congested, whether during the slow start phase or the congestion avoidance phase, the slow start threshold ssthresh is set to half the sender's window value at the moment congestion occurred (but not less than 2). The congestion window CWND is then reset to 1 and the slow start algorithm is performed again.

The goal is to quickly reduce the number of packets sent to the network by the host so that the congested router has enough time to clear the backlog of packets in the queue.

In the figure below, the process of congestion control mentioned above is illustrated by specific numerical values. The send window is now the same size as the congestion window.
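The CWND growth rules, doubling below ssthresh and adding 1 per round above it, can be simulated directly; the round count and threshold below are arbitrary example values:

```python
def cwnd_over_rounds(rounds: int, ssthresh: int) -> list:
    """CWND per transmission round (in MSS units): doubling below ssthresh
    (slow start), +1 per round at or above it (congestion avoidance)."""
    cwnd, history = 1, []
    for _ in range(rounds):
        history.append(cwnd)
        cwnd = cwnd * 2 if cwnd < ssthresh else cwnd + 1
    return history

print(cwnd_over_rounds(8, 16))  # [1, 2, 4, 8, 16, 17, 18, 19]
```

The first five rounds show exponential slow start; once CWND reaches the threshold, growth becomes linear.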

2. Fast retransmission and fast recovery

Fast retransmission

The fast retransmit algorithm first requires the receiver to send a duplicate acknowledgement immediately upon receiving an out-of-order segment (so that the sender learns promptly that a segment has not reached the other side), rather than waiting until it has data of its own to send before acknowledging.

The receiver sends an acknowledgement after receiving M1 and after receiving M2. Now suppose the receiver does not receive M3 but then receives M4.

Obviously, the receiver cannot acknowledge M4, because M4 is an out-of-order segment. Under the basic rules of reliable transmission, the receiver could either do nothing or send an acknowledgement of M2 at a convenient time.

However, the fast retransmit algorithm requires the receiver to promptly send a duplicate acknowledgement of M2, so that the sender learns early that segment M3 has not reached the receiver. The sender then sends M5 and M6; after receiving each of them, the receiver again sends a duplicate acknowledgement of M2. The sender has now received four acknowledgements of M2 in total, the last three being duplicates.

The fast retransmit algorithm also stipulates that as soon as the sender receives three consecutive duplicate acknowledgements, it should immediately retransmit the missing segment M3, instead of waiting for M3's retransmission timer to expire.

Because the sender retransmits the unacknowledged segment as soon as possible, fast retransmission can improve the overall network throughput by about 20%.

Fast recovery

The fast recovery algorithm is used in conjunction with fast retransmission. The process has the following two points:

  • When the sender receives three consecutive duplicate acknowledgements, it applies the multiplicative decrease algorithm, halving the slow start threshold ssthresh.
  • Unlike slow start after a timeout, the sender does not run the slow start algorithm (that is, the congestion window CWND is not reset to 1). Instead, CWND is set to the halved ssthresh value, and then the congestion avoidance algorithm ("additive increase") begins, making the congestion window grow slowly and linearly.
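The duplicate-ACK trigger and the fast recovery adjustment can be sketched as a toy function; the ACK numbers and window values below are illustrative only:

```python
def on_acks(acks: list, cwnd: int, ssthresh: int) -> tuple:
    """Count duplicate ACKs; on the third duplicate, fast retransmit + fast recovery."""
    last_ack, dup = None, 0
    for ack in acks:
        if ack == last_ack:
            dup += 1
            if dup == 3:                       # three duplicate acknowledgements
                ssthresh = max(cwnd // 2, 2)   # multiplicative decrease
                cwnd = ssthresh                # fast recovery: no return to cwnd = 1
                return "retransmit", cwnd, ssthresh
        else:
            last_ack, dup = ack, 0
    return "wait", cwnd, ssthresh

# The receiver keeps acknowledging M2 because M3 never arrived (example above).
print(on_acks([1, 2, 2, 2, 2], cwnd=16, ssthresh=32))  # ('retransmit', 8, 8)
```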
