Moment For Technology

Network: A friend's TCP/IP interview, ending in "go back and wait for our notice"

Posted on Dec. 3, 2022, 9:15 a.m. by Kerry Rees
Category: Back-end Tag: Back-end


Recently I talked with a classmate who wanted to change jobs and interview at a big tech company. He had written "proficient in TCP/IP" on his resume, figuring that he knew the TCP protocol reasonably well and that the interviewer would not dig too deep, so "proficient" it was. He did not expect what came next. Careless indeed.

Follow the official account and let's talk. WeChat search: stealth forward

The opening

My friend's interview was scheduled for 10:30, so he arrived ten minutes early, sat quietly on the sofa, and went over what he had studied. Just before 10:30, a tall, thin man in a plaid shirt opened the door and walked in. "Hello, let's get started," he said. My friend smiled politely and replied, "Sure."

Interviewer: Your resume says you are proficient in TCP/IP. Let's discuss the network model and the TCP and IP protocols. Please start by explaining your understanding of them

  • Friend (thinking: asking about TCP right off the bat? Not playing by the usual script; shouldn't it start with Java fundamentals? Still, a general question like this I can handle)

  • Friend: The network model is generally divided into seven layers: the application layer, the presentation layer, the session layer, the transport layer, the network layer, the data link layer, and the physical layer. Application-layer protocols include HTTP, FTP, and SMTP. TCP belongs to the transport layer, and IP belongs to the network layer
  • Friend: The TCP/IP network model is layered from top to bottom, and each layer is handled by different protocols. Let me draw a picture

Interviewer: Looking at the diagram you drew, TCP has its own header structure. What fields does it contain

  • Friend (what the heck! I only skimmed this on Baidu; how am I supposed to remember it? Wait, I think I saw it last night.)

  • Friend: Here goes. Let me draw a picture and get straight to the point

  • Friend: The TCP header starts with the 16-bit source and destination port numbers, followed by the 32-bit sequence number and acknowledgment number. Then come a 4-bit header length, 6 reserved bits, and 6 flag bits
  • Friend: The remaining 16-bit fields are the window size (which controls the sending window), the checksum (which verifies that the segment has not been corrupted), and the urgent pointer. Finally come the options, whose length is determined by the header length field
  • Friend: The sequence number numbers a TCP segment's data. To make TCP reliable, every segment sent carries a sequence number; when a connection is established, each end randomly generates an initial sequence number. The acknowledgment number works together with the sequence number: a reply carries an acknowledgment number equal to the sequence number of the received segment plus one, i.e., the next data expected
  • Friend: The six flag bits are URG (urgent data), ACK (acknowledgment), PSH (push the data to the application promptly, without waiting for the buffer to fill), RST (reset the connection), SYN (connection establishment), and FIN (connection close notification)
  • Friend: The window size is used by the receiver to control the size of the sender's sliding window
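The fixed part of this layout is easy to verify in code. Here is a minimal sketch in Python (`parse_tcp_header` is an illustrative helper name, not a standard API) that unpacks the 20-byte fixed header just described:

```python
import struct

def parse_tcp_header(data: bytes) -> dict:
    """Parse the fixed 20-byte portion of a TCP header."""
    src, dst, seq, ack, offset_flags, window, checksum, urg = struct.unpack(
        "!HHIIHHHH", data[:20]
    )
    return {
        "src_port": src,
        "dst_port": dst,
        "seq": seq,
        "ack": ack,
        "data_offset": offset_flags >> 12,   # header length in 32-bit words
        "flags": offset_flags & 0x3F,        # URG/ACK/PSH/RST/SYN/FIN bits
        "window": window,
        "checksum": checksum,
        "urgent_ptr": urg,
    }

# A hand-crafted SYN segment: ports 1234 -> 80, seq 1000, offset 5, SYN bit set
raw = struct.pack("!HHIIHHHH", 1234, 80, 1000, 0, (5 << 12) | 0x02, 65535, 0, 0)
hdr = parse_tcp_header(raw)
```

Unpacking a real captured segment the same way is a good exercise for memorizing the field order.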

Interviewer: What's the difference between TCP and UDP

  • Friend (sighs)
  • Friend: 1) Connection: TCP is connection-oriented; UDP is connectionless and does not require a connection to be established before sending data
  • Friend: 2) Reliability: TCP provides a reliable service, guaranteeing that transmitted data arrives without error, loss, or duplication, and in order. UDP is best-effort delivery and does not guarantee reliability
  • Friend: 3) Transmission efficiency: TCP's overhead makes it relatively slow; UDP is more efficient
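The connectionless point shows up directly in the sockets API: a UDP datagram can be sent with no prior handshake at all. A minimal loopback sketch, assuming standard BSD-style sockets:

```python
import socket

# UDP: no connection setup; each sendto() is an independent datagram.
recv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
recv.bind(("127.0.0.1", 0))          # let the OS pick a free port
recv.settimeout(2)
addr = recv.getsockname()

send = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
send.sendto(b"hello", addr)          # no connect() required

data, _ = recv.recvfrom(1024)
send.close()
recv.close()
```

Compare this with the TCP example further down, where `connect()` must succeed before any data moves.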

Interviewer: You just said TCP is a reliable connection. How is that implemented

  • Friend: TCP establishes connections with a three-way handshake and closes them with a four-way wave
  • Friend: To keep data from being lost or corrupted (reliability), it uses checksums, ACK replies, timeout retransmission (sender side), retransmission of out-of-order data (receiver side), discarding of duplicate data, flow control (the sliding window), congestion control, and so on

Interviewer: Be more specific. Walk me through the three-way handshake and the four-way wave

  • Friend (another standard question; free points)
  • Friend: TCP is a reliable two-way channel, which is why it needs three handshakes and four waves. Let me draw a picture
  • Three-way handshake

  • Four-way wave

  • Friend: Let me answer the obvious follow-up in advance. Closing a connection takes four waves, one more than establishing it, because the passively closing side may still have data left to send, so its ACK and FIN cannot be merged the way the second handshake merges SYN and ACK
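In code, the handshake and the wave are performed by the kernel inside ordinary socket calls: `connect()`/`accept()` complete the three-way handshake, and `close()` on each side drives the four-way wave. A minimal loopback sketch:

```python
import socket
import threading

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))
server.listen(1)
port = server.getsockname()[1]

def serve():
    conn, _ = server.accept()   # three-way handshake completes here
    conn.sendall(b"hi")
    conn.close()                # sends FIN: this side closes actively

t = threading.Thread(target=serve)
t.start()

client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(("127.0.0.1", port))   # SYN -> SYN-ACK -> ACK
reply = client.recv(1024)
client.close()                        # completes the four-way wave
t.join()
server.close()
```

None of the SYN/ACK/FIN packets appear in application code; a packet capture on the loopback interface is the way to actually see them.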

Interviewer: What would go wrong without the three-way handshake

  • Friend: With only two handshakes, the client never ACKs the SYN the server sends back after the connection request
  • Friend: If the client fails to establish a connection for its own reasons, it may retry and create TCP connections repeatedly. The server, however, may still consider the connections the client has discarded to be valid, which wastes resources

Interviewer: What's the difference between TIME_WAIT and CLOSE_WAIT

  • Friend: CLOSE_WAIT belongs to the passively closing side. When the peer closes the socket and sends a FIN packet, this side replies with an ACK and enters the CLOSE_WAIT state. It then checks whether it has any data left to send; if not, it sends its own FIN to the peer, enters the LAST_ACK state, and waits for the final ACK
  • Friend: TIME_WAIT belongs to the actively closing side. While in the FIN_WAIT_2 state, it enters TIME_WAIT after receiving the peer's FIN, then waits for two MSLs (Maximum Segment Lifetime) before fully closing

Interviewer: What is TIME_WAIT for, and why must the state be held for two MSLs

  • Friend (that's digging deep, brother. Fortunately I crammed in secret yesterday.)
  • Friend: 1) TIME_WAIT ensures that the final ACK of the four-way wave can reach the peer. If that ACK is lost, the peer times out and resends its FIN. Without the TIME_WAIT state, the already-closed end would answer the retransmitted FIN with an RST packet, which the peer would interpret as an error
  • Friend: 2) Suppose the first connection is closed and an identical second connection is established immediately. A delayed packet from the first connection could still arrive and interfere with the second. Waiting two MSLs lets all packets from the old connection expire in the network
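One practical consequence of TIME_WAIT: a server restarted quickly may fail to `bind()` its old port while the state lingers. The conventional remedy is the SO_REUSEADDR socket option, sketched below:

```python
import socket

# Without SO_REUSEADDR, bind() on a port stuck in TIME_WAIT raises
# "Address already in use". Setting the option lets the restart proceed.
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
s.bind(("127.0.0.1", 0))
flag = s.getsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR)
s.close()
```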

Interviewer: You also mentioned congestion control just now. How does TCP deal with congestion

  • Friend: First there are slow start and congestion avoidance
  • Friend: 1) With slow start, the TCP sender maintains a congestion window, also called cwnd. The initial congestion window is one segment, and the window doubles after every RTT (the time for data to be sent and fully acknowledged). Growth is exponential, though slow in the early stage
  • Friend: 2) With congestion avoidance, the idea is to let cwnd grow slowly: once the sender's cwnd reaches the ssthresh threshold (initial value determined by the system), cwnd increases by one after each RTT instead of doubling, so the window grows linearly (additive increase)
  • Friend: (draws a picture to explain)

  • Friend: If network congestion occurs, the threshold ssthresh is halved, cwnd is reset to 1, and the slow-start phase begins again
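The growth pattern above can be sketched as a toy simulation (`simulate_cwnd` and its parameters are illustrative, not part of any real TCP stack; real implementations count cwnd in bytes and grow it per ACK):

```python
def simulate_cwnd(rtts: int, ssthresh: int) -> list:
    """Toy model of cwnd growth in segments: doubling below ssthresh
    (slow start), then +1 per RTT (congestion avoidance)."""
    cwnd, history = 1, [1]
    for _ in range(rtts):
        cwnd = cwnd * 2 if cwnd < ssthresh else cwnd + 1
        history.append(cwnd)
    return history

growth = simulate_cwnd(6, ssthresh=8)
```

With `ssthresh=8` the window doubles for three RTTs and then switches to linear growth, which is exactly the knee in the classic cwnd-versus-RTT picture.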

Interviewer: What are the other methods of congestion control

  • Friend: Fast retransmit and fast recovery
  • Friend: 1) Fast retransmit means that when the receiver gets an out-of-order segment, it immediately notifies the sender by repeating its last acknowledgment
  • Friend: Suppose the receiver gets M1, misses M2, and then receives M3, M4, and M5. Each of those arrivals triggers another duplicate acknowledgment for M1, so the sender ends up seeing three duplicate ACKs in a row. Under the fast retransmit rule, as soon as the sender receives three duplicate acknowledgments it immediately retransmits M2 (the segment right after the one being acknowledged) without waiting for a timeout

  • Friend: 2) Fast recovery
  • Friend: When the sender receives three duplicate acknowledgments, ssthresh is halved. Since duplicate ACKs suggest the network is not severely congested, the sender does not go back to slow start: cwnd is set to the halved ssthresh and the congestion avoidance algorithm resumes, with cwnd growing linearly
  • Friend: (draws another picture)
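The two reactions, timeout versus three duplicate ACKs, can be contrasted in the same toy model (function names are illustrative; real stacks follow RFC 5681 with considerably more detail):

```python
def on_triple_dup_ack(cwnd: int) -> tuple:
    """Fast recovery (toy model): halve ssthresh, set cwnd to the new
    ssthresh, and continue in congestion avoidance (linear growth)."""
    ssthresh = max(cwnd // 2, 2)
    return ssthresh, ssthresh        # (new ssthresh, new cwnd)

def on_timeout(cwnd: int) -> tuple:
    """Timeout (the harsher reaction described earlier): halve
    ssthresh but restart slow start from cwnd = 1."""
    return max(cwnd // 2, 2), 1      # (new ssthresh, new cwnd)
```

The asymmetry is the whole point: duplicate ACKs prove packets are still getting through, so the sender keeps half its window instead of collapsing to one segment.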

Interviewer: Do you know the sliding window? How do the client and server control it

  • Friend: The receiver puts the amount of buffer space it can accept into the "window size" field of the TCP header and advertises it to the sender in ACK packets. Through this sliding window the receiver controls how much data the sender transmits, which achieves flow control
  • Friend: In fact, the upper limit of the sender's window is the minimum of the congestion window and the receiver's sliding window
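That "minimum of the two windows" rule is small enough to state directly in code (`usable_window` is an illustrative name):

```python
def usable_window(cwnd: int, rwnd: int) -> int:
    """The sender may keep at most min(cwnd, rwnd) unacknowledged data
    in flight: rwnd protects the receiver, cwnd protects the network."""
    return min(cwnd, rwnd)
```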

Interviewer: Do you know the difference between a sliding window and a congestion window

  • Friend: What they have in common is preventing packet loss, and both work by making the sender transmit a little more slowly
  • Friend: The difference lies in what they protect
  • Friend: 1) Flow control protects the receiver, in case the sender transmits faster than the receiver can process
  • Friend: 2) Congestion control protects the network, in case the sender transmits so fast that the network congests and cannot cope

Interviewer: What do you think about TCP packet sticking and unpacking

  • Friend: The size of the data a program wants to send usually differs from the MSS (Maximum Segment Size) a TCP segment can carry
  • Friend: When the data is larger than the MSS, it has to be split across multiple TCP segments; that is unpacking. When it is smaller than or equal to the MSS, several pieces of program data may be merged into one TCP segment; that is packet sticking. Here MSS = TCP segment length - TCP header length
  • Friend: Unpacking and sticking also happen at the IP layer, the link layer, and the physical layer

Interviewer: What are the methods to solve the sticking and unpacking?

  • Friend: 1) Append a special delimiter to the end of each piece of data for segmentation
  • Friend: 2) Make each piece of data a fixed size
  • Friend: 3) Split the data into two parts, a header and a content body; the header has a fixed size and contains a field declaring the size of the body
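Method 3, a length-prefixed header, is the most common in practice. A minimal sketch, assuming a 4-byte big-endian length field:

```python
import struct

def frame(payload: bytes) -> bytes:
    """Prefix each message with a 4-byte big-endian length header."""
    return struct.pack("!I", len(payload)) + payload

def unframe(stream: bytes) -> list:
    """Split a byte stream (possibly several glued messages) back
    into the original messages using the length prefix."""
    msgs, i = [], 0
    while i + 4 <= len(stream):
        (n,) = struct.unpack_from("!I", stream, i)
        if i + 4 + n > len(stream):
            break                   # incomplete message: wait for more bytes
        msgs.append(stream[i + 4 : i + 4 + n])
        i += 4 + n
    return msgs

glued = frame(b"hello") + frame(b"world")   # two messages "stuck" together
```

Even if TCP delivers the two frames in one `recv()` call, the receiver can cut the stream back into the original messages.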

Interviewer: What about SYN Flood attacks

  • Friend: In a SYN Flood, forged SYN packets initiate connections to the server. The server replies to each with a SYN-ACK packet but never receives the final ACK, which leaves half-open connections
  • Friend: If an attacker sends a large number of such packets, the attacked host accumulates a large number of half-open connections, exhausting its resources and preventing normal users from accessing it until the half-open connections time out

Interviewer: Good command of TCP. Let's move on to HTTP. Do you know the steps a program typically goes through for an HTTP request?

  • Friend: 1) Resolve the domain name; 2) establish a connection with the TCP three-way handshake; 3) send the HTTP request over the TCP connection; 4) the server handles the HTTP request and returns data; 5) the client parses the returned data
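Steps 2 through 5 can be exercised locally with the Python standard library (a throwaway server on the loopback address stands in for the real one, so the DNS step is skipped):

```python
import http.client
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Length", "2")
        self.end_headers()
        self.wfile.write(b"ok")

    def log_message(self, *args):   # silence per-request logging
        pass

server = HTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

conn = http.client.HTTPConnection("127.0.0.1", server.server_port)  # TCP handshake
conn.request("GET", "/")                                            # HTTP request
resp = conn.getresponse()                                           # server responds
body = resp.read()                                                  # client parses
conn.close()
server.shutdown()
```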

Interviewer: What are the HTTP response status codes? Name a few that you are familiar with

  • Friend: There are roughly these kinds
    • 200: the request succeeded
    • 400: bad request; generally the request format is incorrect
    • 401: authentication required; generally the credential or token failed verification
    • 403: access forbidden
    • 404: the resource does not exist
    • 500: internal server error
    • 503: the server is temporarily unavailable (maintenance or overload); recoverable
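For reference, Python's standard library enumerates these codes along with their official reason phrases, which is a quick way to double-check them:

```python
from http import HTTPStatus

# Map each code mentioned above to its standard reason phrase.
codes = {c: HTTPStatus(c).phrase for c in (200, 400, 401, 403, 404, 500, 503)}
```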

Interviewer: Good. Next, what's the difference between a session and a cookie

  • Friend: 1) Storage location: cookie data is stored on the client; session data is stored on the server
  • Friend: 2) Capacity: a single cookie holds little data, and a site can store at most about 20 cookies; sessions have no such upper limit
  • Friend: 3) Data types: a cookie can only store ASCII strings; a session can store any type of data
  • Friend: 4) Privacy: cookies are visible to the client; sessions are stored on the server and are transparent to the client
  • Friend: 5) Lifetime: cookies can remain valid for a long time; a session relies on a cookie named JSESSIONID whose default max age is -1, so it expires as soon as the browser window closes
  • Friend: 6) Cross-domain support: cookies can support cross-domain access; sessions do not
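Point 4 can be seen directly: a cookie is just an ASCII string the client holds and sends back, including the JSESSIONID that carries the session id. A sketch using Python's stdlib cookie parser (the id value here is made up):

```python
from http.cookies import SimpleCookie

# Parse a Cookie header string exactly as a client or server would see it.
cookie = SimpleCookie()
cookie.load("JSESSIONID=3A7F; Path=/")
session_id = cookie["JSESSIONID"].value
```

The server stores the actual session data keyed by this id; only the opaque id ever travels to the client.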

Interviewer: Good. Do you know what HTTP chunked transfer is

  • Friend: Chunked transfer is an HTTP mechanism that lets a server send data to a client in multiple parts. It was introduced in HTTP/1.1

Interviewer: What are the benefits of HTTP chunking

  • Friend: Chunked transfer encoding lets the server keep an HTTP persistent connection open for dynamically generated content
  • Friend: Chunked transfer encoding also lets the server send header fields at the end of the message. This matters when a header value is not known until the content has been generated, for example when the message content is hashed
  • Friend: HTTP servers sometimes use compression (gzip or deflate) to cut transmission time. Chunked transfer encoding can then carry the compressed stream piece by piece; the chunks are not compressed individually, the entire payload is, which makes it easier to compress and send data at the same time
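The wire format is simple: each chunk is a hex length line, CRLF, the data, CRLF, and a zero-length chunk terminates the body. A minimal encoder/decoder sketch (illustrative names, no trailer support):

```python
def encode_chunked(parts: list) -> bytes:
    """Serialize byte chunks in Transfer-Encoding: chunked wire format."""
    out = b""
    for p in parts:
        out += f"{len(p):x}\r\n".encode() + p + b"\r\n"
    return out + b"0\r\n\r\n"       # zero-length chunk ends the body

def decode_chunked(body: bytes) -> bytes:
    """Reassemble the original payload from a chunked body."""
    data, i = b"", 0
    while True:
        j = body.index(b"\r\n", i)
        n = int(body[i:j], 16)      # hex chunk-size line
        if n == 0:
            return data
        data += body[j + 2 : j + 2 + n]
        i = j + 2 + n + 2           # skip the chunk data and trailing CRLF

wire = encode_chunked([b"Mozilla", b" Firefox"])
```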

Interviewer: How do you understand HTTP persistent connections

  • Friend: A persistent connection is one where the TCP connection between client and server stays open after it is established: it is not closed once an HTTP request completes, and it is reused for subsequent requests
  • Friend: A persistent connection saves the cost of repeatedly establishing and closing TCP connections, which suits clients that make frequent requests. However, malicious persistent connections can exhaust a server, so they are best used between internal services

Interviewer: Is HTTP secure? How to secure HTTP transmission

  • Friend: It is not secure. Data transmitted over HTTP is plain text and easy for third parties to intercept. For secure transmission you can use the HTTPS protocol, which layers encryption on top of HTTP

Interviewer: How do you understand the difference between HTTPS and HTTP

  • Friend: 1) HTTP connections are stateless and transmit in plain text
  • Friend: 2) HTTPS is a network protocol built from SSL/TLS plus HTTP, with encrypted transmission and identity authentication

Interviewer: What is SSL/TLS and how is HTTPS security implemented?

  • Friend: Secure Sockets Layer (SSL) is the encryption layer beneath HTTP in HTTPS that protects data privacy. Transport Layer Security (TLS) is the updated successor to SSL
  • Friend: HTTPS adds a TLS or SSL security, authentication, and encryption layer on top of HTTP. Through this layer the client first authenticates the server's CA certificate so that it obtains the server's public key correctly
  • Friend: The client then uses that public key to agree on an encryption algorithm and key with the server, and later data is encrypted with it
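In Python, that certificate-authentication step is what the standard library's default client-side TLS context is configured to perform (CA verification plus hostname checking), without the application writing any crypto code:

```python
import ssl

# The stdlib default: verify the server certificate against the system
# CA store and check that it matches the hostname being contacted.
ctx = ssl.create_default_context()
```

Wrapping a TCP socket with `ctx.wrap_socket(sock, server_hostname=...)` would then run the TLS handshake described above on top of the plain connection.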

Interviewer: Can you elaborate on the TLS/SSL authentication process? (The interviewer's phone vibrates on the desk; he glances at it and pauses.)

And there my friend's interview paused for now (to be continued next time)

You are welcome to point out any errors in the text

