Computer network architecture

  • Application layer: completes specific network applications through the interaction between application processes. Application layer protocols define the rules for communication and interaction between application processes. The data units exchanged at the application layer are called messages. Common application layer protocols include HTTP, DNS, FTP, and SMTP.

  • Transport layer: The transport layer is responsible for providing a common data transfer service for communication between processes in two hosts. The transport layer mainly uses the following two protocols:

    TCP: provides a connection-oriented, reliable data transmission service. The unit of data transmission is the segment.

    UDP: provides a connectionless, best-effort data transfer service in the form of user datagrams.

  • Network layer: selects suitable internetwork routes and switching nodes to provide communication services for different hosts on a packet-switched network. When sending data, the network layer encapsulates the segments or user datagrams produced by the transport layer into packets (IP datagrams) for transmission.

    The network layer only provides a simple, flexible, connectionless, best-effort datagram service upward. The Internet Protocol (IP) is one of the most important protocols in the TCP/IP suite, and ARP, ICMP, and IGMP are used alongside it.

  • Data link layer: data traveling between two hosts is always transmitted link by link, which requires dedicated link-layer protocols. The data link layer assembles the IP datagrams handed down by the network layer into frames (adding a frame header and frame trailer) and transmits them over the link between two adjacent nodes. Each frame contains the data plus the necessary control information (such as synchronization information, address information, and error control).

    The data link layer uses a CRC check to detect bit errors in transmission. For wired links with good communication quality, the link-layer protocol does not use acknowledgement and retransmission mechanisms; error correction is left to the upper-layer protocols. For wireless links with poor communication quality, the data link layer uses acknowledgement and retransmission mechanisms to provide a reliable transmission service upward.

  • Physical layer: data transferred at the physical layer is in bits. It realizes transparent transmission of bit streams between adjacent network nodes and shields the upper layers, as far as possible, from the differences between specific transmission media and physical devices.

Data transfer process

Application process AP1 of host 1 sends data to application process AP2 of host 2:

  • AP1 first passes its data to layer 5 (the application layer) of the local host. Layer 5 adds the necessary control information H5, forming the data unit of layer 5.
  • The fourth layer (the transport layer) receives this data unit, adds its own control information H4 to form the data unit of layer 4, and hands it to the third layer (the network layer).

And so on. Note, however, that at the second layer (the data link layer), the control information is divided into two parts, added respectively to the head (H2) and tail (T2) of this layer's data unit. The first layer (the physical layer) transmits a bit stream, so no control information is added. Note that the bit stream is transmitted starting from the head.

When the bit stream leaves the physical layer of host 1 and travels through the network's physical media (the transmission channel) to a router, it rises from layer 1 (the physical layer) to layer 3 (the network layer). Each layer performs the necessary operations based on its control information, strips that control information away, and passes the remaining data unit up to the next layer. When the packet reaches layer 3, the router looks up its routing table according to the destination address in the header, finds the interface on which to forward the packet, and sends the packet down to layer 2 (the link layer). After a new header and trailer are added, the frame goes down to layer 1, and every bit is sent out on the physical media.

When the bit stream leaves the router and arrives at host 2, it rises from layer 1 of host 2 to layer 5 in the same way as described above. Finally, the data sent by application process AP1 is delivered to application process AP2 at the destination.
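The encapsulation and decapsulation steps above can be sketched as a toy simulation. The header names (H5, H4, H3, H2) and the trailer T2 follow the description in the text; their string contents here are purely illustrative.

```python
def encapsulate(data: str) -> str:
    """Wrap application data as it descends the five layers of host 1."""
    pdu = "H5|" + data          # layer 5: application-layer message
    pdu = "H4|" + pdu           # layer 4: transport-layer segment
    pdu = "H3|" + pdu           # layer 3: network-layer packet
    pdu = "H2|" + pdu + "|T2"   # layer 2: a frame gets a header AND a trailer
    return pdu                  # layer 1 sends the raw bit stream unchanged

def decapsulate(frame: str) -> str:
    """Strip the control information layer by layer on host 2."""
    pdu = frame.removeprefix("H2|").removesuffix("|T2")  # link layer
    for header in ("H3|", "H4|", "H5|"):                 # layers 3, 4, 5
        pdu = pdu.removeprefix(header)
    return pdu

frame = encapsulate("hello")
print(frame)               # H2|H3|H4|H5|hello|T2
print(decapsulate(frame))  # hello
```

At the router, only the outer link-layer and network-layer wrappers are examined: the H2/T2 framing is stripped and rebuilt for the outgoing link, while H4, H5, and the data pass through untouched.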

TCP

Three-way handshake

First handshake

When establishing a connection, the client sends a SYN packet (seq = x) to the server and enters the SYN_SENT state. SYN stands for Synchronize Sequence Number.

Second handshake

After receiving the SYN packet, the server must acknowledge the client's SYN (ack = x + 1) and also send its own SYN packet (seq = y), i.e., a SYN+ACK packet. The server then enters the SYN_RECV state.

Third handshake

After receiving the SYN+ACK packet from the server, the client sends an ACK packet (ack = y + 1) to the server. Once this packet is sent, the client and the server enter the ESTABLISHED state (the TCP connection is established), completing the three-way handshake.
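The sequence/acknowledgement arithmetic of the three handshakes can be sketched as follows. Here x and y stand for the initial sequence numbers chosen by the client and server; real TCP picks them unpredictably.

```python
def three_way_handshake(x: int, y: int):
    """Return the three segments exchanged, as (sender, flags, fields)."""
    log = []
    # 1st: client -> server, SYN with seq = x; client enters SYN_SENT
    log.append(("client", "SYN", {"seq": x}))
    # 2nd: server -> client, SYN+ACK with seq = y, ack = x + 1;
    #      server enters SYN_RECV
    log.append(("server", "SYN+ACK", {"seq": y, "ack": x + 1}))
    # 3rd: client -> server, ACK with ack = y + 1; both sides now ESTABLISHED
    log.append(("client", "ACK", {"ack": y + 1}))
    return log

for sender, flags, fields in three_way_handshake(x=100, y=300):
    print(sender, flags, fields)
```

Note how each side's SYN consumes one sequence number, which is why both acknowledgement numbers are the peer's initial sequence number plus one.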

Four-way wave

For an established connection, TCP releases it using an improved four-way handshake (message segments carrying the FIN flag). The procedure for TCP to close a connection is as follows:

First wave

When host A's application notifies TCP that its data has been sent, TCP sends host B a segment with the FIN flag set (FIN stands for Finish).

Second wave

After receiving the FIN segment, host B does not immediately reply to host A with its own FIN. Instead, host B first sends an ACK to host A and notifies its application that the peer wants to close the connection. The ACK is sent to prevent the peer from retransmitting the FIN segment in the meantime.

Third wave

Host B's application tells TCP that it also wants to close the connection completely, and TCP sends a FIN segment to host A.

Fourth wave

After receiving the FIN segment, host A sends an ACK to host B, indicating that the connection is released.
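The four exchanges can be sketched as a short event trace. The state names (FIN_WAIT_1, CLOSE_WAIT, LAST_ACK, TIME_WAIT) come from the TCP standard and are an added detail, not stated in the text above.

```python
def four_way_wave():
    """Return the four segments as (direction, flag, state after sending)."""
    events = []
    # 1st wave: A's application is done; A sends FIN and enters FIN_WAIT_1.
    events.append(("A->B", "FIN", "A:FIN_WAIT_1"))
    # 2nd wave: B acknowledges immediately (its own data may still be in
    # flight) and enters CLOSE_WAIT; the ACK stops A from resending the FIN.
    events.append(("B->A", "ACK", "B:CLOSE_WAIT"))
    # 3rd wave: B's application also closes; B sends FIN and enters LAST_ACK.
    events.append(("B->A", "FIN", "B:LAST_ACK"))
    # 4th wave: A acknowledges B's FIN; the connection is released.
    events.append(("A->B", "ACK", "A:TIME_WAIT"))
    return events

for direction, flag, state in four_way_wave():
    print(direction, flag, state)
```

The asymmetry with connection setup is visible here: B's ACK and FIN travel as two separate segments (waves 2 and 3), because B may still have data to send after A's side closes.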

How does TCP ensure reliable transmission

ARQ protocol

Automatic Repeat reQuest (ARQ) achieves reliable transmission over an unreliable service by combining acknowledgements and timeouts. If the sender does not receive an acknowledgement within a certain period after sending a frame, it usually resends it. The receiver also sends an acknowledgement frame when it receives a duplicate packet. ARQ includes stop-and-wait ARQ and continuous ARQ.

Stop-and-wait protocol: after sending each packet, the sender stops and waits for the other party's acknowledgement. The next packet is sent only after the acknowledgement is received.

Continuous ARQ protocol: the sender maintains a send window, and the packets in the send window can be sent continuously without waiting for individual acknowledgements. The receiver generally uses cumulative acknowledgement, acknowledging the last packet that arrived in order, which indicates that all packets up to that point have been received correctly.
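Cumulative acknowledgement can be sketched as below: the receiver always acknowledges the highest in-order sequence number it has seen, which implicitly covers everything before it. The sequence numbers and the out-of-order arrival pattern are illustrative assumptions.

```python
def cumulative_ack(expected, arrivals):
    """Return the ACK sent after each arrival (highest in-order seq so far)."""
    buffered = set()
    acks = []
    for seq in arrivals:
        buffered.add(seq)
        while expected in buffered:   # deliver any in-order run upward
            buffered.remove(expected)
            expected += 1
        acks.append(expected - 1)     # acknowledge the last in-order packet
    return acks

# Packet 2 is delayed: the ACK stays at 1 until it arrives, then jumps to 4,
# confirming packets 2, 3, and 4 all at once.
print(cumulative_ack(expected=0, arrivals=[0, 1, 3, 4, 2]))  # [0, 1, 1, 1, 4]
```

The repeated ACKs for packet 1 in this trace are exactly the duplicate acknowledgements that the fast retransmit algorithm (described later) uses as a loss signal.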

TCP traffic control

Flow control means keeping the sender's rate from being too fast, so that the receiver has time to accept the data. TCP uses sliding windows for flow control.

Persistence timer: resolves the deadlock in which a non-zero window notification is lost in transit, leaving both sides waiting for each other. As soon as one side of a TCP connection receives a zero-window notification from the other, it starts a persistence timer. When the timer expires, that side sends a zero-window probe segment (carrying only 1 byte of data), and the acknowledgement of the probe segment carries the current window value.
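The receive-window mechanics behind this can be sketched with a toy receiver whose advertised window is simply its free buffer space. The class and its byte counts are assumptions made for this sketch, not real TCP structures.

```python
class Receiver:
    """Toy receive buffer: the advertised window is the free space left."""

    def __init__(self, buffer_size: int):
        self.free = buffer_size

    def accept(self, nbytes: int):
        """Accept as much as fits; return (bytes accepted, new window)."""
        taken = min(nbytes, self.free)
        self.free -= taken
        return taken, self.free

    def consume(self, nbytes: int):
        """The application reads data, so the window reopens."""
        self.free += nbytes

rx = Receiver(buffer_size=4)
print(rx.accept(6))   # (4, 0): only 4 bytes fit; window 0 -> sender must stop

# The application reads 3 bytes; later, the persistence timer fires and the
# sender's 1-byte probe learns the reopened window from the probe's ACK.
rx.consume(3)
print(rx.accept(1))   # (1, 2): probe accepted, window is now known to be 2
```

Without the probe, the sender would wait forever on the lost window update while the receiver waits for more data: exactly the deadlock the persistence timer exists to break.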

TCP congestion control

Congestion control prevents too much data from being injected into the network, so that the routers and links in the network are not overloaded. Congestion control works on the premise that the network can withstand the existing load. It is a global process that involves all hosts, all routers, and all factors associated with degrading network transmission performance. The four congestion control algorithms are as follows:

  • Slow start: the idea behind the slow start algorithm is that when a host starts sending data, injecting a large number of bytes into the network immediately may cause congestion, because the network's condition is not yet known. Experience shows that it is better to probe first, i.e., to increase the send window gradually from small to large, which means increasing the congestion window cwnd gradually from small to large. The initial value of cwnd is 1, and cwnd doubles after each transmission round.

  • Congestion avoidance: the idea of the congestion avoidance algorithm is to make the congestion window cwnd grow slowly: cwnd increases by 1 for every round-trip time (RTT).

  • Fast retransmit and fast recovery: in TCP/IP, fast retransmit and recovery (FRR) is a congestion control algorithm that quickly recovers lost packets. Without FRR, TCP uses a timer and suspends transmission when a packet is lost; during this pause, no new or duplicate packets are sent. With FRR, when the receiver receives an out-of-order segment, it immediately sends a duplicate acknowledgement to the sender. If the sender receives three duplicate acknowledgements, it immediately retransmits the missing segment, avoiding the delay of a retransmission-timeout pause. FRR works best when a single packet is lost; it is not very effective when multiple packets are lost within a short period.
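The growth of cwnd under slow start and congestion avoidance can be sketched as below. The ssthresh value is an assumed parameter, and cwnd is measured in segments (MSS units) for simplicity; this sketch omits the loss events that would reset cwnd.

```python
def cwnd_trace(rounds: int, ssthresh: int = 16):
    """Return cwnd at the start of each transmission round."""
    cwnd, trace = 1, []
    for _ in range(rounds):
        trace.append(cwnd)
        if cwnd < ssthresh:
            cwnd *= 2   # slow start: double per transmission round
        else:
            cwnd += 1   # congestion avoidance: +1 per RTT
    return trace

print(cwnd_trace(8))  # [1, 2, 4, 8, 16, 17, 18, 19]
```

The trace shows the characteristic shape: exponential growth while cwnd is below ssthresh, then linear growth once the slow start threshold is reached.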

TCP’s finite state machine
