Differences between TCP and UDP

1. TCP provides reliable data transmission; UDP does not.

2. TCP has flow control; UDP does not.

3. TCP has congestion control; UDP does not.

4. TCP is connection-oriented; UDP is connectionless.

5. UDP supports one-to-one, one-to-many, and many-to-many communication; TCP supports only one-to-one.

6. UDP simply adds its header to the application-layer message and passes the datagram down, preserving message boundaries. TCP is byte-stream-oriented: if the application writes too many bytes, TCP can split them across several segments; if it writes too few, TCP can wait until enough bytes accumulate before sending.

7. The TCP header is at least 20 bytes; the UDP header is only 8 bytes.
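The contrast between a connectionless datagram service and a connection-oriented byte stream can be seen directly with Python's standard `socket` module. This is a minimal sketch on the loopback interface; the port numbers are chosen by the OS and both "peers" live in one process purely for illustration.

```python
import socket

# UDP: connectionless -- each sendto() carries one application message,
# framed with only an 8-byte header (ports, length, checksum).
recv_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
recv_sock.bind(("127.0.0.1", 0))          # let the OS pick a free port
recv_sock.settimeout(2)

send_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
send_sock.sendto(b"hello", recv_sock.getsockname())  # no handshake at all
data, _ = recv_sock.recvfrom(1024)
print(data)                                # the datagram arrives whole

# TCP: connect() performs the three-way handshake before any data moves,
# and the 20+ byte header carries sequence/ack numbers for reliability.
listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.bind(("127.0.0.1", 0))
listener.listen(1)
client = socket.create_connection(listener.getsockname())
server, _ = listener.accept()
server.settimeout(2)
client.sendall(b"hello")
streamed = server.recv(1024)               # bytes from the stream, no boundaries
print(streamed)

for s in (recv_sock, send_sock, listener, client, server):
    s.close()
```

Note that `recv` on the TCP side returns whatever bytes are available in the stream, while `recvfrom` on the UDP side always returns exactly one datagram.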

How does TCP ensure reliable data transmission

TCP ensures reliable data transmission mainly through timeout retransmission and fast retransmission.

If data sent by the sender is not acknowledged within a specified period of time, it is resent. This is the timeout retransmission mechanism. The key question is how to choose that time. If it is too short, the sender will often retransmit data that was merely delayed, wasting network resources; if it is too long, lost data goes unrepaired for a long while and communication efficiency drops. The timeout is usually set to the estimated round-trip time plus four times its deviation.

The estimated round-trip time is computed as an exponentially weighted moving average: the new EstimatedRTT is the previous EstimatedRTT multiplied by (1 − α), plus α times the current SampleRTT (the sampled round-trip time). The recommended value of α is 0.125.


EstimatedRTT = (1 − α) · EstimatedRTT + α · SampleRTT

The deviation value DevRTT is computed from the same RTT measurements and estimates how far SampleRTT typically strays from EstimatedRTT. The recommended value of β is 0.25.


DevRTT = (1 − β) · DevRTT + β · |SampleRTT − EstimatedRTT|

The resulting timeout interval is then:


TimeoutInterval = EstimatedRTT + 4 · DevRTT
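The three formulas above can be folded into one update step. This is a minimal sketch with the recommended gains α = 0.125 and β = 0.25; the initial values and the RTT samples are made up for illustration.

```python
ALPHA, BETA = 0.125, 0.25   # recommended gains for the two EWMAs

def update_rto(estimated_rtt, dev_rtt, sample_rtt):
    """Fold one new RTT sample into the smoothed estimate and deviation,
    and return the resulting timeout interval."""
    dev_rtt = (1 - BETA) * dev_rtt + BETA * abs(sample_rtt - estimated_rtt)
    estimated_rtt = (1 - ALPHA) * estimated_rtt + ALPHA * sample_rtt
    timeout = estimated_rtt + 4 * dev_rtt
    return estimated_rtt, dev_rtt, timeout

est, dev = 0.100, 0.025     # seconds; assumed starting values
for sample in (0.110, 0.300, 0.090):    # hypothetical RTT samples
    est, dev, rto = update_rto(est, dev, sample)
    print(f"sample={sample:.3f}  EstimatedRTT={est:.3f}  TimeoutInterval={rto:.3f}")
```

A single outlier sample (0.300 above) inflates DevRTT, so the timeout widens quickly but then decays back as normal samples return.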

What about the fast retransmission mechanism? TCP is a pipelined protocol: the sender can have multiple segments in flight at once. Each time the receiver gets a segment, it responds with an ACK for the edge of its current window. If the sender receives three duplicate ACKs for the same sequence number, it retransmits the missing segment immediately, without waiting for the retransmission timer to expire.
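The duplicate-ACK trigger can be sketched as a simple counter over the stream of ACK numbers the sender observes. This is a toy model, not real TCP state machinery.

```python
def fast_retransmit_check(acks):
    """Return the sequence numbers a sender would fast-retransmit:
    those for which it saw the original ACK plus three duplicates."""
    dup_count = {}
    retransmitted = []
    last_ack = None
    for ack in acks:
        if ack == last_ack:
            dup_count[ack] = dup_count.get(ack, 1) + 1
            if dup_count[ack] == 4:        # original ACK + 3 duplicates
                retransmitted.append(ack)
        else:
            dup_count[ack] = 1
        last_ack = ack
    return retransmitted

# Segment starting at 200 was lost, so the receiver keeps re-ACKing 200
# for every later out-of-order segment it gets.
print(fast_retransmit_check([100, 200, 200, 200, 200, 500]))  # [200]
```

Waiting for three duplicates (rather than one) avoids retransmitting on mere reordering, which commonly produces one or two duplicate ACKs.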

TCP is a pipelined protocol, and pipelined protocols have two classic implementations: Go-Back-N and Selective Repeat (SR). Go-Back-N uses a sending window greater than 1 and a receiving window equal to 1; SR uses sending and receiving windows both greater than 1. TCP combines the two. For example, TCP keeps only one retransmission timer, set for the earliest segment that has been sent but not yet acknowledged — this resembles Go-Back-N — while the receiver returns cumulative ACKs up to the highest in-order sequence number and, on a loss, only the one missing segment is retransmitted, which resembles SR.

TCP flow control

In the TCP protocol, each side sends data according to the other side's receiving capacity. If the receiver cannot keep up and the sender keeps sending anyway, the receiver's buffer will overflow. Therefore each side must tell the other how much buffer space it has: the TCP header carries a receive window field that indicates how much data the receiver can currently accept.
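The sender's side of this rule is easy to state: never let the amount of unacknowledged in-flight data exceed the advertised receive window. A minimal sketch, with made-up byte counts:

```python
def sendable_bytes(rwnd, last_sent, last_acked):
    """Bytes the sender may still push without overflowing the receiver:
    the advertised window minus the data already in flight."""
    in_flight = last_sent - last_acked
    return max(rwnd - in_flight, 0)

# Receiver advertised a 4096-byte window; 2000 bytes are in flight.
print(sendable_bytes(rwnd=4096, last_sent=3000, last_acked=1000))  # 2096
# Window fully consumed: the sender must pause until ACKs arrive.
print(sendable_bytes(rwnd=4096, last_sent=5096, last_acked=1000))  # 0
```

When the window reaches zero the sender stops, which is exactly how a slow receiver throttles a fast sender.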

TCP congestion control

Before discussing congestion control, let's see why it is needed. Without it, both sides would send segments at maximum speed, congesting the communication link. Congestion increases delay and packet loss, which triggers TCP's retransmission mechanisms, so segments that were already sent are very likely to be sent again. The proportion of useful information on the link then keeps shrinking while repeated copies of earlier segments keep growing. Congestion control exists to break this cycle.

There are two types of congestion control: end-to-end congestion control and network-assisted congestion control. In end-to-end congestion control, the sender and receiver themselves regulate the transmission rate. In network-assisted congestion control, routers tell the sender the maximum rate they can handle; this approach is used in the ATM architecture. TCP/IP uses end-to-end congestion control.

Congestion control tries to avoid congestion, while the communicating parties want to transmit as fast as possible; the two goals conflict. TCP congestion control therefore has two phases: slow start and congestion avoidance. In the slow start phase, the congestion window grows by one MSS for each ACK received, which doubles it every round-trip time and quickly raises the sending rate toward the threshold. In the congestion avoidance phase, the congestion window grows by only one MSS per round-trip time.

The congestion window grows in the two ways above, but how does it shrink? Decreases are triggered by timeout retransmission and fast retransmission. On a retransmission timeout, the congestion window is reset to 1 MSS, the threshold (ssthresh) is set to half the congestion window at the time of loss, and slow start begins again; once the window reaches the threshold, congestion avoidance takes over and growth becomes linear. On a fast retransmission, the threshold becomes half the current window, and the congestion window is set to the threshold plus three (for the three duplicate ACKs received).
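The rules above can be sketched as a per-RTT toy simulation in the style of TCP Reno. Window sizes are in units of MSS; the initial ssthresh of 8 and the event sequence are made up for illustration, and real TCP tracks much more state.

```python
def step(cwnd, ssthresh, event):
    """Advance the congestion window by one event: a round of ACKs,
    a timeout, or three duplicate ACKs (fast retransmit)."""
    if event == "timeout":
        return 1, max(cwnd // 2, 2)          # restart slow start from 1 MSS
    if event == "3-dup-acks":                # fast retransmit / fast recovery
        ssthresh = max(cwnd // 2, 2)
        return ssthresh + 3, ssthresh        # new ssthresh + 3 dup ACKs
    if cwnd < ssthresh:                      # slow start: double per RTT
        return min(cwnd * 2, ssthresh), ssthresh
    return cwnd + 1, ssthresh                # congestion avoidance: +1 per RTT

cwnd, ssthresh = 1, 8                        # assumed initial values
trace = []
events = ["ack", "ack", "ack", "ack", "ack", "3-dup-acks", "ack", "timeout", "ack"]
for event in events:
    cwnd, ssthresh = step(cwnd, ssthresh, event)
    trace.append(cwnd)
print(trace)    # [2, 4, 8, 9, 10, 8, 9, 1, 2]
```

The trace shows the characteristic sawtooth: exponential growth to the threshold, linear growth, a halving on three duplicate ACKs, and a collapse to 1 MSS on a timeout.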

TCP's congestion control also makes TCP fair: multiple TCP connections sharing a link converge to equal shares of the bandwidth, because each connection increases its window additively and decreases it multiplicatively.

TCP is connection-oriented and UDP is connectionless

TCP is connection-oriented and goes through a three-way handshake to establish the connection and a four-way wave to tear it down, which I summarized in another article.

UDP is connectionless. Compared with TCP, UDP has no connection-setup delay and can start transmitting data sooner. UDP suits applications that tolerate some loss but are sensitive to delay, such as IP telephony and video conferencing.

TCP supports one-to-one; UDP supports one-to-one, one-to-many, and many-to-many

This is because UDP does not establish connections before transferring data, so a sender can address messages to any number of recipients without maintaining per-peer connection state.
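One-to-many sending is visible in the socket API itself: a single unconnected UDP socket can `sendto()` as many destinations as it likes. A minimal sketch on the loopback interface, with all peers in one process for illustration:

```python
import socket

# Three independent receivers, each on its own OS-assigned port.
receivers = []
for _ in range(3):
    r = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    r.bind(("127.0.0.1", 0))
    r.settimeout(2)
    receivers.append(r)

# One sender socket, no connect() call: the same socket addresses
# every peer explicitly, since there is no connection state to keep.
sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
for r in receivers:
    sender.sendto(b"one-to-many", r.getsockname())

received = [r.recvfrom(1024)[0] for r in receivers]
print(received)     # three copies of b'one-to-many'

for s in receivers + [sender]:
    s.close()
```

A TCP socket, by contrast, is bound to exactly one peer after `connect()`, which is why TCP is inherently one-to-one.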

UDP is faster than TCP

1. UDP has no congestion control or flow control.

2. The UDP header is small (8 bytes, versus at least 20 for TCP).

3. UDP does not need to establish a connection.

Reference: Computer Networking: A Top-Down Approach