Computer network


Overview

Basic concepts

Bandwidth

The term has two meanings.

The first meaning:

“Bandwidth” means the width of the frequency band occupied by a signal. The basic unit is the hertz (Hz).

For example, Wi-Fi operates in the 2.4 GHz and 5 GHz frequency bands.

The second meaning:

“Bandwidth” is synonymous with the highest data rate that a digital channel can transmit, in bits per second (bit/s).

For example, the upstream bandwidth is 100 Mbit/s and the downstream bandwidth is 1 Mbit/s

Throughput

Throughput is the amount of data passing through a network (channel, interface) per unit time. It is most often used to measure a real-world network, i.e., how much data actually gets through. Throughput is limited by the bandwidth or the rated rate of the network.

Simply put, it is the amount of data passing through a port over a period of time, analogous to electric current or the rate of water flow.

Delay

The time it takes for data to travel from one end of a network (or link) to the other.

Transmission delay (sending delay)

The time it takes for a host or router to send out a data frame.

Transmission delay = data frame length / sending rate

Limited by the sending rate of the network adapter

Propagation delay

The time it takes for an electromagnetic wave to propagate a certain distance in a channel.

Propagation delay = channel length / propagation speed in the channel

Limited by the transmission medium; for example, optical fiber is faster than copper wire

Queuing delay

The delay a packet experiences while waiting in a node's buffer queue.

Intermediate nodes and the destination node may both have data waiting to be processed, so packets queue up

Processing delay

The time a switching node takes to process a packet for store-and-forward.

That is, the node must process each packet after receiving it
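Putting the four components together, the total delay can be sketched as below (a minimal illustration; the function and parameter names are made up for this example):

```python
def total_delay(frame_bits, send_rate_bps, distance_m, prop_speed_mps,
                queuing_s=0.0, processing_s=0.0):
    """Total delay = transmission + propagation + queuing + processing."""
    transmission = frame_bits / send_rate_bps   # frame length / sending rate
    propagation = distance_m / prop_speed_mps   # channel length / propagation speed
    return transmission + propagation + queuing_s + processing_s

# A 1000-bit frame on a 1 Mbit/s link over 200 km of fiber (~2e8 m/s):
# transmission = 1 ms, propagation = 1 ms, total = 2 ms
delay = total_delay(1000, 1e6, 200_000, 2e8)
```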

Utilization

Channel utilization indicates what percentage of a channel’s time is used (with data passing). A completely idle channel has zero utilization. Network utilization is the weighted average of channel utilization of the whole network.

The higher the utilization, the greater the delay; utilization that is too high leads to congestion, while utilization that is too low wastes resources

Switching methods

Layered architecture

The physical layer

Basic concepts

The primary task of the physical layer is to define four characteristics of the interface with the transmission medium.

Mechanical characteristics:

Specifies the shape and size of the connector used for the interface, the number and arrangement of its pins, and so on.

Electrical characteristics:

Specifies the range of voltages that occur on each line of an interface cable.

Functional characteristics:

Specifies the meaning of a given voltage level appearing on a line.

Procedural characteristics:

Specifies the order in which various possible events occur for different functions

Data communication system model

Transmitter: Converts data into a signal that can be transmitted over the transmission medium

Basic terminology

  • Data: the entity that carries the message.
  • Signal: the electrical or electromagnetic representation of data.
  • Analog signal: the parameter representing the message takes continuous values.
  • Digital signal: the parameter representing the message takes discrete values.
  • Channel: a path along which information is transmitted in one direction.

Communication mode

  • One-way (simplex) communication: data flows in only one direction, with no interaction in the other direction.
  • Two-way alternating (half-duplex) communication: both parties can send information, but not send and receive at the same time.
  • Two-way simultaneous (full-duplex) communication: both parties can send and receive messages at the same time.

Conversion relationship between source data and transmission signal

Modulation: using a carrier to shift a digital signal's frequency range to a higher band and convert it into an analog signal for transmission over an analog channel. Demodulation: restoring a received analog signal to a digital signal.

Common modulation mode

1. Baseband modulation

Also called encoding; the converted signal is still a baseband signal

2. Bandpass modulation

The carrier shifts the signal from a low frequency band to a high one so that it can be transmitted over an analog channel. The modulated signal is called a bandpass signal

The limiting capacity of a channel

1. Nyquist’s theorem

In any channel there is an upper limit on the symbol (baud) rate; exceeding it causes serious inter-symbol interference.

If the channel's frequency band is wider, symbols can be transmitted at a higher rate without inter-symbol interference.

2. Shannon formula

The limiting information transmission rate of a channel with limited bandwidth and gaussian white noise interference

C = W log2(1 + S/N)

W is the bandwidth of the channel (in Hz); S is the average power of the signal transmitted in the channel; N is the noise power inside the channel. The signal-to-noise ratio S/N is usually expressed in decibels (dB): SNR(dB) = 10 log10(S/N).

By encoding, the amount of information carried by each code element can be increased
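The Shannon formula above can be evaluated directly; this sketch (function names are illustrative) also converts a decibel S/N to a linear ratio:

```python
import math

def shannon_capacity(bandwidth_hz, snr_linear):
    """C = W * log2(1 + S/N): the limiting rate of a noisy channel."""
    return bandwidth_hz * math.log2(1 + snr_linear)

def snr_from_db(snr_db):
    """Convert an S/N given in decibels to a linear power ratio."""
    return 10 ** (snr_db / 10)

# A 3000 Hz channel with S/N = 30 dB (a ratio of 1000):
c = shannon_capacity(3000, snr_from_db(30))   # roughly 29.9 kbit/s
```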

Channel multiplexing

Frequency division multiplexing

The available frequency band of the channel is divided into several narrow sub-bands, each of which transmits a signal. After the user is assigned a certain frequency band, it occupies this frequency band from beginning to end in the communication process.

Frequency division multiplexing of light: wavelength division multiplexing

Time-division multiplexing

Synchronous TDM

Time is divided into equal time slots, and each user occupies a fixed serial number of time slots to transmit data. The time slots occupied by each user occur periodically.

All users of TDM occupy the same bandwidth at different times

Statistical TDM (asynchronous TDM)

The data to be sent is first collected, then multiplexed in turn. Since the time slot a user occupies is no longer fixed, address information must be added to each data frame

Code division multiplexing

Each user is assigned a chip sequence; the sequences are mutually orthogonal.

To send a 1, the station sends its chip sequence

To send a 0, the station sends the inverse of its sequence

Orthogonality means that the inner product of one user's chip sequence with any other user's sequence is 0

The normalized inner product of a sequence with itself is 1, and with its own inverse is −1
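These inner-product properties can be checked with a small sketch. The chip sequences below are hypothetical examples chosen to be orthogonal; real systems use much longer codes:

```python
def inner(a, b):
    """Normalized inner product of two chip vectors."""
    return sum(x * y for x, y in zip(a, b)) / len(a)

# Hypothetical orthogonal chip sequences (+1/-1) for stations A and B
A = (-1, -1, -1, +1, +1, -1, +1, +1)
B = (-1, -1, +1, -1, +1, +1, +1, -1)

send_1 = A                          # A sends bit 1 as its sequence
send_0 = tuple(-x for x in A)       # A sends bit 0 as the inverse

# Correlating the received signal with A's sequence:
# +1 means A sent 1, -1 means A sent 0, 0 means A did not send.
assert inner(A, B) == 0             # orthogonality
assert inner(send_1, A) == 1
assert inner(send_0, A) == -1
```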

Data link layer

Basic concepts

Why the data link layer exists
  • There are many kinds of transmission media, different communication procedures and different performance;
  • In the physical layer, data is transmitted in bits, which is inefficient.
  • On the original physical line, there may be errors in signal transmission, and the physical layer does not know whether there are errors;
  • From the perspective of network reference model, all layers above the physical layer are responsible for improving the quality of data transmission, and the data link layer is the most important layer.
Purpose of the data link layer

On top of a physical transmission line that may introduce errors, error detection, error control, and flow control are used to turn it into a logically error-free data link, providing high-quality service to the network layer.

There are two main types of channels
  • Point-to-point channel: This channel uses one-to-one point-to-point communication.
  • Broadcast channel: This channel uses one-to-many broadcast communication. There are many hosts connected in the broadcast channel, and the dedicated shared channel protocol must be used to coordinate the data transmission of these hosts.
Link

A physical line from one node to an adjacent node without any other switching nodes in between.

  • In data communication, the communication path between two computers often needs to pass through multiple such links, also known as physical links.
  • A link is only a component of a path.
  • Physical wiring: point-to-point (node to adjacent node)
  • Physical lines: wired, wireless, LAN
Data Link

Data link, also known as logical link, is formed by adding the hardware and software of communication protocol to the link.

  • Adapters (network cards) implement the hardware and software for these protocols;
  • Adapter: generally includes data link layer and physical layer functions;
  • A logical link is built on a physical link.
  • If all physical lines are disconnected, it is impossible to establish a logical link for data transmission.
The basic function
Encapsulated into a frame
  • Data link layer transmission unit is “frame”.
  • Assemble datagrams into “frames” that can be sent, received, and unpacked correctly.
Transparent transmission
  • No matter what kind of bit combination, it can be effectively transmitted on the data link.
  • Distinguish between data information and control information.
Error control
  • Transmission errors are inevitable in data communication, so it is necessary to reduce error rate and improve reliability.
  • Whenever possible, detect errors and correct them.
Flow control
  • Control the flow of data communication to ensure the orderly progress of data communication;
  • Avoid data loss caused by the receiver receiving data too late.
Link management
  • When communicating, the two parties need to establish a data link first. Maintain data links when transmitting data; Release the data link when the communication is complete.
The MAC address
  • One of the main functions of the MAC sublayer.
  • Identifies the NIC's MAC address, i.e., its physical address or hardware address.

Three basic questions

Encapsulated into a frame
  • Frame encapsulation is to add a header and a tail respectively before and after a piece of data to form a frame.
  • One of the most important functions of the header and tail is frame delimiting.

Each frame has a maximum length limit

Transparent transmission
  • Data of any combination of bits can be transmitted through the data link layer. It can also be said that the data link layer is transparent to this data.
  • Byte stuffing (character stuffing): the sending data link layer inserts an escape character “ESC” before any control character “SOH” or “EOT” appearing in the data (an ESC in the data is itself preceded by another ESC). The receiving data link layer removes the inserted escape characters before handing the data to the network layer.

Inserting the escape character prevents control characters that appear in the data from being misinterpreted

This is similar to the escape character ‘\’ in the C language, where a literal backslash is written as ‘\\’
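The byte-stuffing rule described above can be sketched as follows (the control-character values are illustrative; an ESC appearing in the data is escaped as well):

```python
ESC, SOH, EOT = b'\x1b', b'\x01', b'\x04'

def byte_stuff(data: bytes) -> bytes:
    """Sender side: insert ESC before any control byte in the data."""
    out = bytearray()
    for b in data:
        if bytes([b]) in (SOH, EOT, ESC):
            out += ESC
        out.append(b)
    return bytes(out)

def byte_unstuff(data: bytes) -> bytes:
    """Receiver side: drop each inserted ESC, keeping the byte after it."""
    out = bytearray()
    it = iter(data)
    for b in it:
        if bytes([b]) == ESC:
            b = next(it)        # the escaped byte is kept as data
        out.append(b)
    return bytes(out)

payload = b'A' + SOH + b'B' + EOT
assert byte_unstuff(byte_stuff(payload)) == payload
```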

Error detection
  • Bit errors can occur during transmission: 1 becomes 0 or 0 becomes 1.
  • The ratio of the number of erroneous bits to the total number of bits transmitted over a period of time is called the bit error rate (BER).
  • To ensure reliable data transmission, computer networks must adopt various error-detection measures.
  • Error-control codes: error-correcting codes and error-detecting codes

At the sending end:

The data is divided into groups of k bits, and n redundant bits are appended to each group

For example, let M = 101001 (k = 6); the length of the data actually sent is k + n

The n-bit redundancy code (n is agreed in advance) is obtained as follows:

1. Multiply M by 2^n, i.e., append n zeros: 2^n · M = 101001000 (here n = 3)

2. Divide 2^n · M by the agreed divisor P (here P = 1101) using modulo-2 division; the remainder R = 001 is the redundancy code (FCS)

3. Send 2^n · M + R = 101001001

The receiver:

Divides the received frame by P (modulo 2) and checks the remainder R

  • If the remainder R = 0, the frame is judged to be correct and accepted.
  • If the remainder R≠0, the frame is judged to be wrong and discarded.
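The CRC computation walked through above (M = 101001, P = 1101, n = 3) can be reproduced with a short modulo-2 division sketch:

```python
def crc_remainder(message: str, divisor: str) -> str:
    """Remainder of (message * 2^n) / divisor under modulo-2 division,
    where n = len(divisor) - 1."""
    n = len(divisor) - 1
    bits = [int(b) for b in message] + [0] * n   # append n zeros: 2^n * M
    div = [int(b) for b in divisor]
    for i in range(len(message)):
        if bits[i] == 1:                         # XOR the divisor in (mod 2)
            for j in range(len(div)):
                bits[i + j] ^= div[j]
    return ''.join(str(b) for b in bits[-n:])

R = crc_remainder('101001', '1101')   # the example above gives R = '001'
frame = '101001' + R                  # transmitted frame: 101001001
```

The receiver divides the whole received frame by P the same way and accepts it only when the remainder is 0.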

The PPP protocol

  • The Point-to-Point Protocol (PPP) is currently the most widely used WAN data link layer protocol in the world.
  • PPP is generally used when users use dial-up telephone lines to access the Internet. PPP is a data link layer protocol used by a computer to communicate with an ISP.
PPP function
  1. Encapsulated into a frame
  2. Transparent transmission
  3. Error detection
  4. Supports multiple network layer protocols
  5. Supports multiple types of links
  6. Checking connection Status
  7. Data compression negotiation
  8. Network-layer address negotiation
  9. Authentication support
The frame format
  1. Flag field: Indicates the start of a frame

  2. Address field: since the link is point-to-point, no address is needed; the field is fixed at 0xFF

  3. Control field: currently has no assigned meaning

  4. Protocol:

    1. When the protocol field is 0x0021, the information field of the PPP frame is the IP datagram.
    2. If the value is 0xC021, the information field is PPP link control data.
    3. If the value is 0x8021, it indicates network control data.
  5. IP datagram: at most 1500 bytes; anything longer is fragmented first (the network layer handles the length problem)

  6. FCS: frame check sequence, used to check the frame for errors

  7. End field: Indicates the end of a frame

Transparent transmission

Because the flag field 0x7E is 01111110 in binary, i.e., six consecutive 1s in the middle, zero-bit stuffing is used to avoid misinterpreting data as a flag: the sender inserts a 0 after every five consecutive 1s, and the receiver deletes the 0 that follows every five consecutive 1s
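The zero-bit stuffing rule can be sketched as follows (bits are represented as a string of '0'/'1' characters for clarity):

```python
def bit_stuff(bits: str) -> str:
    """Sender: insert a '0' after every run of five consecutive '1's."""
    out, run = [], 0
    for b in bits:
        out.append(b)
        run = run + 1 if b == '1' else 0
        if run == 5:
            out.append('0')
            run = 0
    return ''.join(out)

def bit_unstuff(bits: str) -> str:
    """Receiver: delete the '0' that follows five consecutive '1's."""
    out, run, skip = [], 0, False
    for b in bits:
        if skip:                  # drop the stuffed '0'
            skip = False
            run = 0
            continue
        out.append(b)
        run = run + 1 if b == '1' else 0
        if run == 5:
            skip = True
            run = 0
    return ''.join(out)

data = '0111111001111101'
assert '111111' not in bit_stuff(data)   # the flag pattern can never appear
assert bit_unstuff(bit_stuff(data)) == data
```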

PPP running status
  1. When a user dials into an ISP, the router’s modem confirms the call and establishes a physical connection.

  2. The PC sends a series of LCP packets (encapsulated in PPP frames) to the router to establish a data link connection. These packets and their responses negotiate the PPP parameters.

  3. At the network layer, NCP assigns a temporary IP address to the newly connected PC, making the PC become a host on the Internet.

  4. When the communication is complete, NCP releases the network layer connection and takes back the original IP address. Next, LCP releases the data link layer connection. Finally, the physical layer connections are released.

    PPP includes link layer control protocol and network layer control protocol

    LCP, the Link Control Protocol, is used to establish, tear down, and monitor the data link layer. It can establish the point-to-point link, monitor link status, avoid loops, and provide PAP and CHAP authentication

    NCP is the Network Control Protocol; commonly used NCPs include IPCP and IPXCP. It negotiates network-layer parameters

Local area network (LAN)

LAN Topology

Channel sharing technology
Statically divided channel
  • Frequency division multiplexing
  • Time-division multiplexing
  • WDM
  • Code division multiplexing
Dynamic Media Access Control (Multipoint Access) :

Channels are not assigned to users in a fixed way when they communicate.

  • Random access

    All users may send at any time. If two or more users happen to send at the same time, a collision occurs on the shared channel and all of their transmissions fail.

  • Controlled access

    Users cannot send random messages and must submit to certain controls.

    • Polling

      How it works: the primary station asks each secondary station in turn whether it has data to send.

    • Token ring

      A sends data to D

      1. A seizes the free token and sends its data encapsulated in a data frame
      2. B forwards the frame
      3. C forwards the frame
      4. D copies and accepts the data frame, then forwards the original frame
      5. A removes the data frame and releases a free token

Ethernet

DIX Ethernet V2 was the world's first local area network product (Ethernet) specification, defining a baseband bus LAN that uses a passive cable as the bus. IEEE later issued the closely related 802.3 standard.

The two standards are similar and are generally treated equally

MAC
  1. The IEEE 802 standard specifies a 48-bit global address for LANs; it is the address burned into the ROM of each computer's network adapter.

  2. The IEEE Registration Authority (RA) assigns the first three bytes of the address field (the high-order 24 bits) to vendors. The last three bytes (the low-order 24 bits) are assigned by the vendor itself.

  3. When a MAC frame is received

    1. If the frame is sent to this site, accept it and then do other processing.
    2. Otherwise, the frame is discarded and no further processing is done.
  4. “Frames sent to this site” includes the following three types of frames:

    1. Unicast frame: one party sends and one party receives
    2. Broadcast frame: sent by one party and received by all
    3. Multicast frame: sent by one party and received by multiple members
  5. MAC frame format

    The type field is used to indicate the protocol used by the upper layer so that the data received from the MAC frame is forwarded to the upper layer. In 802.3, it can also indicate the length

    The data part is the IP datagram passed down from the previous layer

    The minimum data field length is 46 bytes because Ethernet's CSMA/CD requires a minimum frame length of 64 bytes; subtracting the 14-byte header (6 + 6 + 2) and the 4-byte trailer leaves 46 bytes. If the data is shorter than 46 bytes, padding bytes are added to make up the difference

    In PPP, flag fields at the beginning and end of a frame mark its boundaries. In Ethernet, a preamble is prepended to mark the start of a MAC frame, but the receiver does not pass it up to higher layers

    Of the 8 bytes inserted in front of the frame, the first field is a 7-byte preamble, used to quickly synchronize the receiver's bit clock to the MAC frame. The second field is the 1-byte start-of-frame delimiter, indicating that what follows is a MAC frame.

  6. Invalid MAC frame

    1. The length of the data field is inconsistent with the value of the length field;
    2. The frame length is not an integer of bytes;
    3. Detect errors with the received frame check sequence FCS;
    4. The length of the data field is not between 46 and 1500 bytes.

    Any invalid MAC frame detected is simply discarded. Ethernet is not responsible for retransmitting discarded frames; retransmission is left to higher layers such as the transport layer

Extended Ethernet
Extend Ethernet at the physical layer
  • Hub expansion: use hubs to connect hosts into a larger, multi-level star Ethernet.

    The original collision domain is independent, but the collision domain is enlarged after connecting with the hub

  • advantages

    1. Enables computers on Ethernet networks originally belonging to different collision domains to communicate across collision domains.
    2. Expanded the geographic coverage of Ethernet.
  • disadvantages

    1. The collision domain is larger, but the overall throughput is not.
    2. If different collision domains use different data rates, they cannot be connected by hubs.
Extend Ethernet at the data link layer

  • Ethernet switch

    1. define

      1. Also called switched hub.
      2. Each interface is directly connected to a single host or another Ethernet switch and generally operates in full-duplex mode.
      3. Ethernet switching can connect multiple pairs of interfaces at the same time, so that multiple pairs of hosts can monopolize the transmission media and transmit data without collision.
    2. advantages

      1. Plug and play
      2. By using the special switching structure chip, the forwarding rate of hardware is much faster than that of software.
      3. When switching from a shared bus Ethernet to a switched Ethernet, the software, hardware, and adapters of all connected devices do not need to be changed.
      4. Generally have a variety of speed interface, convenient for a variety of different situations of users.
    3. Forwarding methods

      1. The store-and-forward method caches the entire data frame before processing it.

        The disadvantage is slow speed

      2. In straight-through forwarding mode, when receiving a frame, the forwarding interface of the frame is determined according to the destination MAC address of the frame, thus improving the forwarding speed of the frame.

        The disadvantage is that frames cannot be checked for errors

    4. Self-learning function

      The Ethernet switch runs a self-learning algorithm to process the received frames and set up the switch table

      Steps for self-learning and forwarding frames

      1. Self-learning: after receiving a frame, the switch looks in the switching table for an entry matching the frame's source address. If there is none, it adds an entry (source address, incoming interface, and expiry time); if there is one, it updates that entry.

      2. Forwarding: the switch looks in the switching table for an entry matching the frame's destination address.

        1. If not, it is forwarded to all other interfaces except the incoming interface.

        2. If yes, the packets are forwarded based on the interface specified in the switching table.

          If the interface given in the switch table is the interface through which the frame enters the switch, then the frame should be discarded (because it does not need to be forwarded through the switch).

    5. Spanning tree protocol

      Cause: Redundancy is required to maintain normal operation even when some links fail

      When redundant links are added, the process of self-learning can cause Ethernet frames to circle indefinitely in a loop of the network.

      To solve this problem, the IEEE 802.1D standard defined the Spanning Tree Protocol (STP).

      The main point is that the actual topology of the network is not changed, but some links are logically cut off so that the path from one host to all other hosts is a loop-free tree structure, thus eliminating the circling phenomenon.
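The self-learning and forwarding steps described above can be sketched as a small simulation (the class and method names are made up; expiry times are omitted for brevity):

```python
class EthernetSwitch:
    """Minimal self-learning switch table sketch (no timeouts)."""
    def __init__(self, ports):
        self.ports = ports
        self.table = {}                  # MAC address -> port

    def receive(self, src_mac, dst_mac, in_port):
        # 1. Self-learn: remember which port the source address came from.
        self.table[src_mac] = in_port
        # 2. Forward: look up the destination address.
        out = self.table.get(dst_mac)
        if out is None:                  # unknown: flood all other ports
            return [p for p in self.ports if p != in_port]
        if out == in_port:               # same segment as sender: discard
            return []
        return [out]                     # known: forward to that one port

sw = EthernetSwitch(ports=[1, 2, 3])
assert sw.receive('A', 'B', 1) == [2, 3]   # B unknown: flood
assert sw.receive('B', 'A', 2) == [1]      # A was learned on port 1
```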

CSMA/CD

Carrier Sense Multiple Access with Collision Detection

The main points of
  • Multiple access: many computers are connected to a single bus.
  • Carrier sense: before sending data, each station checks whether another computer on the bus is transmitting; if so, it defers sending to avoid a collision.
  • Collision detection: while sending data, the computer monitors the channel for collisions, which is done by checking the signal voltage on the bus.
Influence of propagation delay on carrier listening

When a sending station detects a collision:

  1. It stops sending data immediately
  2. It then sends a few bits of a jamming signal so that all stations know a collision has occurred.
The characteristics of
  1. Ethernet using the CSMA/CD protocol cannot perform full-duplex communication, only two-way alternating (half-duplex) communication.
  2. For a short period of time after each station sends data, there is the possibility of a collision.
  3. This uncertainty of transmission causes the average traffic across Ethernet to be much less than the highest data rate of Ethernet.
Contention period (collision window)

After sending a data frame, a station knows within at most 2τ whether that frame suffered a collision. The end-to-end round-trip delay 2τ of the Ethernet is therefore called the contention period, or collision window. If no collision is detected during the contention period, the station can be sure this transmission will not collide.

Truncated binary exponential backoff algorithm

After stopping its transmission, a station involved in a collision waits (backs off) for a random time before retransmitting.

  1. The basic backoff time is usually taken as the contention period 2τ.
  2. Define k = Min[number of retransmissions, 10], so k ≤ 10
  3. Pick a random number r from the set of integers [0, 1, …, 2^k − 1]. The retransmission delay is r times the basic backoff time.
  4. After 16 failed retransmissions, the frame is discarded and the failure is reported to higher layers

Function:

  1. Reduces the probability of a retransmission collision.
  2. If collisions keep occurring, it suggests that more stations are contending for the channel. With the backoff algorithm, the average retransmission delay grows with the number of retransmissions (known as dynamic backoff), which lowers the collision probability and helps keep the whole system stable.
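The backoff procedure can be sketched as follows (the function name is illustrative):

```python
import random

def backoff_slots(retransmissions: int) -> int:
    """Truncated binary exponential backoff: choose r uniformly from
    [0, 2^k - 1] with k = min(retransmissions, 10). The station waits
    r contention periods (r * 2τ, i.e. r * 51.2 µs on 10M Ethernet)."""
    if retransmissions > 16:
        raise RuntimeError('give up: report the failure to higher layers')
    k = min(retransmissions, 10)
    return random.randint(0, 2 ** k - 1)

# After the 3rd collision, r is drawn from {0, 1, ..., 7}:
assert all(0 <= backoff_slots(3) <= 7 for _ in range(100))
```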
Ethernet (10M) indicator

Length of contention period: 51.2 µs

Minimum valid frame length: 64 bytes

If a collision occurs, it must happen within the first 64 bytes sent. Since transmission is aborted as soon as a collision is detected, an aborted frame has always sent fewer than 64 bytes. Ethernet therefore sets the minimum valid frame length to 64 bytes; any frame shorter than 64 bytes is an invalid frame aborted by a collision.
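The 64-byte figure follows directly from the 10 Mbit/s rate and the 51.2 µs contention period:

```python
rate_bps = 10_000_000            # 10 Mbit/s Ethernet
contention_period_s = 51.2e-6    # 2τ, the end-to-end round-trip delay

# A frame must take at least 2τ to send so the sender is still
# transmitting when any collision is detected:
min_frame_bits = round(rate_bps * contention_period_s)   # 512 bits
min_frame_bytes = min_frame_bits // 8                    # 64 bytes
```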

Minimum interval between frames: 9.6 µs

After a station detects that the bus is idle, it has to wait 9.6 µs before sending data again

This gives time to process the frame just received

Why a minimum frame length is required

A node detects a collision from the signal voltage, but it does not know which frame collided. For a frame to be retransmitted after a collision, the sender itself must learn that the frame it sent has collided.

So the frame must be long enough that the sender is still transmitting when the collision is detected; the sender then knows the frame being sent is invalid and can retransmit it later

A paragraph quoted from Xie Xiren's Computer Networks

The network layer

The network layer provides two types of services

TCP/IP protocol

IP address of the class

Each type of address consists of two fixed-length fields, one of which is the network number net-ID, which identifies the network to which the host (or router) is connected, and the other is the host number host-ID, which identifies the host (or router).

Several LANs connected by repeaters or bridges still form one network, so they all share the same network number (net-ID).

It is in dotted decimal notation

The three commonly used classes of IP addresses

Class A: the number of networks is reduced by 2. Reason: a network number of all 0s means “this network”, and 127 (01111111) is reserved for local software loopback tests

Class B, C: the number of networks is reduced by 1. Reason: 128.0.0.0 and 192.0.0.0 are not assigned

The number of hosts is reduced by 2. Reason: host numbers of all 0s and all 1s are not assigned

A special IP address that is not generally used

Layered forwarding process

The routing table

Routing tables need to be configured or generated based on algorithms

The next hop is the address of the next router

Specific routes and default routes

Host-specific route: Specifies a route for a specific destination host.

Default route: used when no more specific route matches

Router packet forwarding algorithm
  1. Based on the IP address of the target host, determine the target network N

  2. Check whether network N is directly connected to the router

    1. Directly connected

      Direct delivery to the target host

    2. Not directly connected

      1. A specific host route is configured and delivered to the next hop
      2. There is no host-specific route, there is a route to the network, delivered to the next hop
      3. If there is no route to the network, the default route is used and the route is delivered to the next hop
      4. There is no default route and a forwarding failure is reported
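The forwarding steps above can be sketched as follows (a simplified illustration using Python's standard ipaddress module; the addresses are made up):

```python
from ipaddress import ip_address, ip_network

def forward(dst_ip, direct_nets, host_routes, net_routes, default_route=None):
    """Sketch of the router packet-forwarding algorithm.
    direct_nets: networks directly attached to this router
    host_routes: {destination host IP -> next hop}
    net_routes:  {destination network -> next hop}"""
    dst = ip_address(dst_ip)
    for net in direct_nets:                 # 1-2. directly connected?
        if dst in ip_network(net):
            return ('direct', dst_ip)
    if dst_ip in host_routes:               # host-specific route
        return ('next-hop', host_routes[dst_ip])
    for net, hop in net_routes.items():     # route to the destination network
        if dst in ip_network(net):
            return ('next-hop', hop)
    if default_route:                       # default route
        return ('next-hop', default_route)
    raise LookupError('forwarding failed: no route')

r = forward('192.168.2.5',
            direct_nets=['192.168.1.0/24'],
            host_routes={},
            net_routes={'192.168.2.0/24': '10.0.0.2'})
assert r == ('next-hop', '10.0.0.2')
```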

Address resolution protocol ARP

Function: Resolves hardware addresses used at the data link layer from IP addresses used at the network layer.

Each host has an ARP cache, which stores the mapping table from the IP address of each host and router on the LAN to the hardware address. ARP sets a lifetime for each mapped address item stored in the cache, and removes any item that exceeds the lifetime from the cache.

ARP working process

When host A wants to send an IP packet to host B on the local LAN, host A checks whether host B’s IP address is available in the ARP cache.

  • If yes, you can find out the corresponding hardware address, write the hardware address into the MAC frame, and then send the MAC frame to the hardware address over the LAN.
  • If not, the ARP process broadcasts an ARP request packet on the local LAN. After receiving an ARP response packet, the device writes the mapping from the IP address to the hardware address to the ARP cache
  • ARP resolves the mapping between IP addresses and hardware addresses of hosts or routers on the same LAN.
  • The resolution from IP address to hardware address is automatically performed, and the host user is unaware of the resolution process.
  • Whenever a host or router wants to communicate with another host or router with a known IP address on the local network, ARP automatically resolves the IP address to the hardware address required by the link layer.

If the problem is between different networks, you need to use routers to solve the problem

For example, H1 accesses H3

  1. Because H3 is not on this network, H1 first finds router R1 via ARP on the local network; the rest is handled by R1

  2. R1 then finds H3 on network 2 via ARP

IP datagram format

An IP datagram consists of a header and data.

The header is divided into a fixed part and a variable part. The length of the fixed part is 20 bytes, and the length of the variable part is variable.

Fixed part

Version: the IP protocol version, IPv4 or IPv6

Header length: 4 bits; the maximum value it can represent is 15 (2^4 − 1) units, where one unit is 4 bytes. The maximum IP header length is therefore 60 bytes (15 × 4).

Differentiated service: takes 8 bits. This field is only useful if you use DiffServ. In general, this field is not used.

Total length: the length of the header plus data, in bytes. The field is 16 bits, so the maximum datagram length is 65535 bytes.

Why datagrams are fragmented

  1. The total length of an IP datagram must not exceed the MTU, the maximum transfer unit of the data link layer below.
  2. If a datagram is longer than the MTU of the data link layer, it must be fragmented.

Identification: 16 bits. A counter is used to generate the identification of each IP datagram.

Fragments carrying the same identification belong to the same original IP datagram

Flag: 3 bits; currently only the lowest two bits are meaningful.

The lowest bit of the flag field is MF (More Fragments). MF = 1 means more fragments follow; MF = 0 means this is the last fragment.

The middle bit of the flag field is DF (Don't Fragment). DF = 1 means fragmentation is not allowed; DF = 0 means fragmentation is allowed.

Fragment offset: 13 bits. It indicates the relative position of a fragment within the data of the original datagram after a long datagram is fragmented. The unit of offset is 8 bytes.
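The identification, MF flag, and 8-byte offset unit work together during fragmentation. A sketch (the field layout is simplified to a dictionary, and the example numbers are illustrative):

```python
def fragment(total_len, header_len, mtu, ident):
    """Split one datagram's data into fragments. Every fragment's data
    length except the last must be a multiple of 8 bytes, because the
    fragment offset is counted in 8-byte units."""
    data_len = total_len - header_len
    max_data = (mtu - header_len) // 8 * 8    # round down to 8-byte units
    frags, offset = [], 0
    while offset < data_len:
        size = min(max_data, data_len - offset)
        mf = 1 if offset + size < data_len else 0
        frags.append({'id': ident, 'MF': mf,
                      'offset': offset // 8,   # offset in 8-byte units
                      'data_len': size})
        offset += size
    return frags

# A 3820-byte datagram (20-byte header, 3800 bytes of data) over MTU 1420
# splits into three fragments with offsets 0, 175, 350 and MF = 1, 1, 0:
frags = fragment(3820, 20, 1420, ident=777)
```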

TTL: 8 bits, Time To Live, the lifetime of a datagram in the network. It represents the maximum number of routers the datagram may pass through.

It prevents a datagram with an unreachable destination from being forwarded in the network indefinitely; each router that forwards the datagram decreases the TTL by 1

Protocol: 8 bits, indicating which protocol is used for the data carried by this datagram, so that the IP layer of the destination host submits the data portion to which process.

It can be TCP, UDP, or IP routing protocol

Header checksum: 16 bits; only the header of the datagram is checked, not the data part

The data part is checked for errors by the transport layer

The variable part

Optional fields: Variable in length, ranging from 1 to 40 bytes, depending on the project selected, to support troubleshooting, measurement, and security measures. It’s actually rarely used.

Subnetting and supernetting

Subnetting

A subnet-number field is inserted into the IP address, turning the two-level IP address into a three-level IP address.

Subnet mask

It is a 32-bit string of 1s and 0s, where 1 corresponds to the network ID and subnet ID of the original IP address, and 0 corresponds to the host ID.

Default subnet mask

(IP address) AND (Subnet mask) = Network address
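The AND operation can be demonstrated with Python's standard `ipaddress` module (the address and mask here are illustrative):

```python
import ipaddress

ip = ipaddress.IPv4Address("192.168.10.77")
mask = ipaddress.IPv4Address("255.255.255.0")

# (IP address) AND (subnet mask) = network address
network = ipaddress.IPv4Address(int(ip) & int(mask))
print(network)  # 192.168.10.0
```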

Classless addressing

Classless Inter-Domain Routing (CIDR).

CIDR eliminates the traditional concepts of Class A, B, and C addresses and of subnets, allowing a more efficient allocation of the IPv4 address space.

CIDR uses network-prefixes of various lengths to replace network and subnet numbers in classified addresses.

Notation
Slash notation (CIDR notation)

For example: 128.14.35.7/20

Address mask

For example, the mask of the /20 address block is 255.255.240.0
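Python's standard `ipaddress` module can confirm this mask. `strict=False` lets us pass an address with host bits set, as in the /20 example above:

```python
import ipaddress

# 128.14.35.7/20 from the example: /20 keeps the top 20 bits as the prefix
block = ipaddress.ip_network("128.14.35.7/20", strict=False)
print(block.network_address)  # 128.14.32.0
print(block.netmask)          # 255.255.240.0
print(block.num_addresses)    # 4096
```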

Routing aggregation

Contiguous IP addresses with the same network prefix form CIDR address blocks.

A CIDR address block can contain many addresses. CIDR address blocks are used in routing tables to find the destination network. Therefore, this aggregation of addresses is called route aggregation. Route aggregation is also called forming a hypernet.

Routing protocol

Internal Gateway protocol

RIP
  • RIP is a distributed routing protocol based on distance vectors.

  • RIP requires every router in the network to maintain a record of the distance from itself to each destination network.

    Definition of distance:

    • The distance from a router to a directly connected network is defined as 1.
    • The distance from a router to an indirectly connected network is the number of routers passed through plus 1.
    • RIP considers a good route to be one that passes through few routers, i.e. one with a short distance.
    • A RIP path can contain at most 15 routers.
    • A distance of 16 means the network is unreachable.
Key points of work
  • Exchange information only with neighboring routers.
  • The information exchanged is all that the local router currently knows, that is, its own routing table.
  • Routing information is exchanged at fixed intervals, for example, every 30 seconds.
  • When the router starts working, it only knows the distance to the directly connected network (defined as 1).
  • Henceforth, each router will only exchange and update routing information with a very limited number of neighboring routers.
  • After several updates, all routers eventually know the shortest distance to any network in the autonomous system and the address of the next-hop router.
  • RIP converges quickly; "convergence" means that all nodes in the AS obtain correct routing information
The process of updating routes

Distance vector algorithm

On receiving a RIP packet from a neighboring router whose address is X:

  1. First modify every entry in the received packet: set the Next Hop field to X and add 1 to every distance field.

  2. For each modified entry, repeat the following:

    1. If the entry's destination network is not in the routing table, add the entry to the table.
    2. Otherwise, if the next hop recorded in the existing table entry is X, replace the existing entry with the received one.
    3. Otherwise, if the distance in the received entry is smaller than the distance in the table, update the table entry.
    4. Otherwise, do nothing.

  3. If no updated routing table is received from a neighboring router within 3 minutes, record that neighbor as unreachable, i.e. set the distance to 16 (16 means unreachable).

  4. Return.
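The steps above can be sketched as a small Python function (the table layout and router names are illustrative):

```python
def rip_update(table, neighbor, packet, max_metric=16):
    """Apply one RIP update received from `neighbor`.

    table:  {dest_network: (distance, next_hop)}
    packet: {dest_network: distance} as advertised by the neighbor
    """
    for dest, dist in packet.items():
        new_dist = min(dist + 1, max_metric)        # step 1: add 1, next hop = neighbor
        if dest not in table:
            table[dest] = (new_dist, neighbor)      # unknown network: add it
        else:
            old_dist, old_hop = table[dest]
            if old_hop == neighbor:
                table[dest] = (new_dist, neighbor)  # same next hop: always replace
            elif new_dist < old_dist:
                table[dest] = (new_dist, neighbor)  # shorter route: update
            # otherwise: keep the existing entry
    return table

table = {"N1": (3, "R2"), "N2": (5, "R4")}
packet_from_r4 = {"N1": 1, "N2": 2, "N3": 7}
print(rip_update(table, "R4", packet_from_r4))
# {'N1': (2, 'R4'), 'N2': (3, 'R4'), 'N3': (8, 'R4')}
```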

Example: R6 receives the routing table from R4

R4 started first

Increase the distance by 1 and change the next hop to R4

Compare with R6's original routing table

Final result after the update

Advantages and disadvantages of RIP

Advantages: Simple implementation and low overhead.

Disadvantages:

  1. RIP limits the network scale.

    Because the maximum distance is 15; a distance of 16 is considered unreachable

  2. When a network failure occurs, it takes a long time for this information to be transmitted to all routers.

    Good news travels fast; bad news travels slowly

  3. The routing information exchanged between routers is the complete routing table in routers, so the overhead increases as the network size increases.

    A large amount of routing information is recorded

OSPF

OSPF is a distributed link-state protocol.

  • Use flooding to send information to all routers in the autonomous system.

  • The information sent is the link state of all routers adjacent to the local router.

    The link state indicates which routers are adjacent to the router and the metric of the link.

  • The router sends this message to all routers by flooding only when the link status changes.

  • Because of the frequent exchange of link state information among routers, all routers can eventually establish a link state database.

  • A link-state database is in fact a topology map of the whole network, and it is consistent across the network (this is called link-state database synchronization).

  • Each router uses data from the link-state database to construct its own routing table.

    Each router computes its routing table from the database using Dijkstra's shortest-path algorithm
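A minimal sketch of that computation: Dijkstra's algorithm over a hypothetical three-router link-state database (router names and link costs are made up for illustration):

```python
import heapq

def dijkstra(graph, source):
    """Shortest-path distances from `source` over a link-state database.

    graph: {node: {neighbor: link_cost}}
    """
    dist = {source: 0}
    pq = [(0, source)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry
        for v, w in graph[u].items():
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist

lsdb = {
    "R1": {"R2": 1, "R3": 4},
    "R2": {"R1": 1, "R3": 2},
    "R3": {"R1": 4, "R2": 2},
}
print(dijkstra(lsdb, "R1"))  # {'R1': 0, 'R2': 1, 'R3': 3}
```

R1 reaches R3 through R2 (cost 1 + 2 = 3) rather than over the direct cost-4 link.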

OSPF area
  • To make OSPF usable on large-scale networks, OSPF divides an AS into several smaller ranges, called areas.
  • The advantage of partitioning is that the exchange of link-state information by flooding is limited to each region rather than the entire autonomous system, which reduces the traffic over the entire network.

OSPF packet types
  1. Hello packet: used to discover and maintain reachability to neighboring routers.
  2. Database Description packet: gives a neighbor summary information about the local link-state database.
  3. Link State Request packet: asks the other party for the details of certain link-state entries.
  4. Link State Update packet: uses flooding to update link states across the whole network.
  5. Link State Acknowledgment packet: acknowledges a Link State Update packet.

Effect of a flooding update

External Gateway protocol

BGP
Why can't an interior gateway protocol be used between autonomous systems?

The size of the Internet makes routing between autonomous systems difficult.

Different autonomous systems define costs differently

Routing between autonomous systems must take into account relevant policies.

This may involve matters such as national security

Working principle
  • BGP tries to find a good route to the destination network (one that does not loop), rather than an optimal route.

  • BGP is a path-vector routing protocol.

  • The administrator of each AS must select at least one router as the BGP speaker for that AS.

    Generally a border router is selected

  • A BGP speaker exchanges routing information with the BGP speakers of other ASes.

  • The network reachability information exchanged by BGP is the sequence of ASes that must be traversed to reach a network.

  • After BGP speakers exchange network reachability information with one another, each speaker selects, according to its policy, the preferred route to each AS from the routing information it has received.

Characteristics
  • The number of nodes exchanging routing information in BGP is on the order of the number of ASes.
  • Finding a good path among many autonomous systems amounts to finding the right sequence of BGP speakers.
  • BGP supports CIDR, so a BGP routing table includes the destination network prefix, the next-hop router, and the sequence of autonomous systems on the way to the destination network.
  • When BGP first starts running, neighboring BGP speakers exchange their entire BGP routing tables; afterwards only the changed parts need to be updated as changes occur
Four kinds of message
  1. OPEN message: used to establish a relationship with a neighboring BGP speaker.
  2. UPDATE message: used to advertise information about a route and to list routes being withdrawn.
  3. KEEPALIVE message: used to confirm the OPEN message and to periodically confirm the neighbor relationship.
  4. NOTIFICATION message: used to report detected errors.

Transport layer

The basic concept

The transport layer provides communication services to applications and distinguishes them through ports

The transport layer exists only in hosts; routers use only the layers up to the network layer

Transport layer data is wrapped in network layer datagrams, but the network layer does not access the data

The main differences between transport layer protocols and network layer protocols

Ports for the transport layer

  • A process identifier identifies a process running on a computer.
  • In the transport layer, protocol port numbers (ports for short) are used to identify application processes in the TCP/IP architecture, so that application processes on computers running different operating systems can communicate with each other.
  • TCP/IP transport layer ports are identified with a 16-bit port number.
Two categories of ports
  1. Port numbers used by servers

    1. Well-known (system) port numbers: 0-1023
    2. Registered port numbers: 1024-49151, for applications that have no well-known port number. Port numbers in this range must be registered with IANA to prevent duplication.
  2. Client (ephemeral) port numbers

    49152-65535, left for client processes to choose and use temporarily.
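This can be observed with Python's standard `socket` module: binding to port 0 asks the operating system for an ephemeral port (the exact range is OS-dependent; many systems use 49152-65535, while Linux defaults to 32768-60999):

```python
import socket

# Port 0 means "let the OS pick a free ephemeral port"
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.bind(("127.0.0.1", 0))
print(s.getsockname()[1])  # the ephemeral port the OS picked
s.close()
```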

UDP

User datagram protocol

The characteristics of

  1. UDP adds only a few features to IP's datagram service:

    1. Multiplexing and demultiplexing
    2. Error detection
  2. UDP is connectionless

  3. UDP uses best effort delivery, that is, reliable delivery is not guaranteed.

  4. UDP has no congestion control and is well suited for multimedia communications.

  5. UDP supports one-to-one, one-to-many, many-to-one, and many-to-many interactive communications.

  6. The header of UDP has a small overhead of only 8 bytes.

  7. UDP is packet oriented.

    UDP does not merge or split packets from the application layer, but retains the boundaries of the packets

Header format

Length: the length of the UDP user datagram in bytes. The minimum value is 8, meaning the datagram carries no data

Checksum: a checksum computed over the entire user datagram, used for error detection
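The checksum is the standard 16-bit one's-complement Internet checksum. A minimal Python sketch (it omits the UDP pseudo-header, which the real computation also covers):

```python
def internet_checksum(data: bytes) -> int:
    """16-bit one's-complement checksum used by UDP, TCP and the IP header."""
    if len(data) % 2:
        data += b"\x00"                           # pad to an even number of bytes
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]     # sum 16-bit words
        total = (total & 0xFFFF) + (total >> 16)  # fold the carry back in
    return ~total & 0xFFFF

csum = internet_checksum(b"\x45\x00\x00\x1c")
# Verification property: data plus its own checksum sums to all ones,
# so checksumming the whole thing yields 0
assert internet_checksum(b"\x45\x00\x00\x1c" + csum.to_bytes(2, "big")) == 0
print(hex(csum))  # 0xbae3
```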

TCP

Header format

The header is divided into a fixed part and a variable part

Fixed header

Sequence number field: 4 bytes. Its value is the sequence number of the first byte of data sent in this segment

The acknowledgement number field holds the sequence number of the first byte of data expected in the next segment from the peer.

For example, B correctly receives a segment from A whose sequence number field is 501 and whose data length is 200 bytes (sequence numbers 501-700). This means B has correctly received all of A's data up to sequence number 700, so B expects the next data from A to start at sequence number 701.

If the acknowledgement number is N, all data up to sequence number N-1 has been correctly received.

Data offset (header length): 4 bits. It gives the distance from the start of the TCP segment to the start of its data, in units of 32-bit words (4 bytes).

Reserved field: contains 6 bits, reserved for future use, but should be set to 0 at present.

URG (urgent): when URG = 1, the urgent pointer field is valid. It tells the system that this segment contains urgent data that should be transmitted as soon as possible (the equivalent of high-priority data).

ACK (acknowledgement): the acknowledgement number field is valid only when ACK = 1; when ACK = 0 the acknowledgement number is invalid

PSH (push): when TCP receives a segment with PSH = 1, it delivers it to the receiving application process as soon as possible instead of waiting until the buffer fills up.

RST (reset): when RST = 1, a serious error has occurred in the TCP connection (for example, a host crash); the connection must be released and the transport connection established again

SYN (synchronize): SYN = 1 indicates that this is a connection request or connection accept segment

FIN (finish): used to release a connection. FIN = 1 indicates that the sender of the segment has finished sending data and requests release of the transport connection.

Window field: 2 bytes. It tells the peer the basis for setting its send window, in bytes.

For example, in a segment sent from A to B the acknowledgement number is 701 and the window field is 1000. This means that A, the sender of this segment, has buffer space to receive 1000 bytes starting from sequence number 701; B is thereby told it may send up to 1000 bytes.

Checksum: 2 bytes. The checksum field is computed the same way as in UDP.

Urgent pointer field: 16 bits, giving the number of bytes of urgent data in the segment (urgent data is placed at the front of the data).

Variable header

Options field: variable length. TCP originally defined only one option, the maximum segment size (MSS). The MSS is the maximum length of the data field in a TCP segment.

Padding field: added so that the total header length is a multiple of 4 bytes.

Main uses

How reliable transmission works

Ideal transmission conditions
  1. The transmission channel does not produce errors.

  2. No matter how fast the sender sends data, there is always time for the receiver to process the received data.

    Under such ideal transmission conditions, reliable transmission can be achieved without taking any measures. However, the actual network does not have the above two ideal conditions. Some reliable transport protocols must be used to achieve reliable transport over unreliable transport channels.

Stop-and-wait protocol

After sending a packet, stop and wait for an acknowledgement; send the next packet only after the acknowledgement arrives.

  • After a packet is sent, a copy of the sent packet must be retained temporarily.

    To retransmit after timeout

  • Both packets and acknowledgements must be numbered.

  • The retransmission time of the timeout timer should be somewhat longer than the average round-trip time of a packet.

  • Using the above acknowledgement and retransmission mechanism, we can achieve reliable communication over unreliable transport networks.

  • This reliable transport protocol is often called Automatic Repeat reQuest (ARQ).

  • ARQ means the request for retransmission is automatic: the receiver does not need to ask the sender to retransmit an erroneous packet

Channel utilization

Let the send time be $T_D$, the round-trip time be $RTT$, and the receive time be $T_A$.

Channel utilization: $U = \dfrac{T_D}{T_D + RTT + T_A}$. It is clear that the channel utilization of the stop-and-wait protocol is very low.
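A quick numeric check of the formula in Python (the frame size and RTT below are illustrative):

```python
def stop_and_wait_utilization(t_d, rtt, t_a):
    """U = T_D / (T_D + RTT + T_A) for the stop-and-wait protocol."""
    return t_d / (t_d + rtt + t_a)

# Sending a 1 KB frame at 1 Mb/s (T_D = 8 ms) over a 100 ms RTT link,
# with a negligible 0.1 ms acknowledgement time:
print(f"{stop_and_wait_utilization(8, 100, 0.1):.3f}")  # 0.074
```

Less than 8% of the channel's time carries data, which is why continuous ARQ is needed.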

Continuous ARQ protocol

Continuous ARQ protocol is based on the sliding window mechanism.

The sender maintains the sending window, which means that the packets located in the sending window can be continuously sent without waiting for the confirmation of the other party.

The serial number allowed to be sent is displayed in the send window. The size of the send window indicates the number of packets that the sender can send consecutively.

The receiver maintains a receive window; packets that fall within the receive window are allowed to be received

The receiver generally uses cumulative acknowledgement: instead of acknowledging each received packet individually, it sends an acknowledgement for the last packet that arrived in sequence, which means all packets up to and including that one have been correctly received.

Go-back-N: when a packet is lost, the sender must go back and retransmit the N packets it had already sent from that point on.

For example, the sender sends the first 5 packets and the third packet is lost. The receiver can then acknowledge only the first two packets, so after a timeout the sender retransmits the last three packets (3, 4, 5).
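That example can be traced in a few lines of Python (a simplified sketch that assumes the window covers all five packets and the loss happens only once):

```python
def go_back_n_trace(n_frames, lost):
    """Which frames hit the wire when frame `lost` is dropped once
    and the sender's window covers all n_frames."""
    first_pass = list(range(1, n_frames + 1))     # send the whole window
    retransmit = list(range(lost, n_frames + 1))  # timeout: go back to the loss
    return first_pass + retransmit

# Frames 1-5 with frame 3 lost: 1,2,3,4,5 then 3,4,5 again
print(go_back_n_trace(5, lost=3))  # [1, 2, 3, 4, 5, 3, 4, 5]
```

Frames 4 and 5 arrived intact the first time but are resent anyway; that wasted retransmission is the cost Go-back-N pays for its simple receiver.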

Congestion control

At some point in time, if demand for a resource in the network exceeds the available portion of the resource, the performance of the network deteriorates – congestion occurs.

Condition for congestion: $\sum \text{demand for a resource} > \text{available resource}$

Q: Can the problem of network congestion be solved by increasing some resources arbitrarily, for example, expanding the storage space of node cache, or changing the link to a higher speed link, or increasing the computing speed of node processor?

A: No. Network congestion is a very complex problem. Simply taking the measures above will in many cases not only fail to solve the congestion problem but may even make network performance worse.

Congestion control comes at a cost.

Generally, when congestion is detected, this information must be conveyed to the source stations that generate the packets, but the notification packets themselves add to the congestion. Another approach is to reserve a bit or field in the packets forwarded by routers, using its value to indicate whether congestion has occurred. It is also possible for a host or router to issue probe packets periodically to ask whether congestion has occurred.

Congestion control and flow control

Congestion control is a global matter of keeping the network's resources from being overloaded, while flow control is a point-to-point matter between a sender and a receiver

Example 1: an optical fiber network whose links transmit at 1000 Gb/s. A supercomputer sends a file to a PC at 1 Gb/s.

Flow control is needed: the network can easily carry the traffic, but the PC may not be able to process data that fast.

Example 2: a network whose links transmit at 1 Mb/s, connecting 1000 large computers, 500 of which send files at 100 Kb/s to the other 500.

Congestion control is needed: if they all send at once the demand is 50 Mb/s, which the 1 Mb/s links cannot carry.

Congestion control algorithm

The flow chart

cwnd: congestion window. ssthresh: slow start threshold state variable.

When cwnd < ssthresh, the slow start algorithm is used.

When cwnd > ssthresh, slow start stops and the congestion avoidance algorithm is used instead.

When cwnd = ssthresh, either slow start or congestion avoidance may be used.

Slow start
  1. When the host starts sending packets, set the congestion window cwnd = 1

  2. The congestion window increases by 1 each time an acknowledgement for a new segment is received

    The net effect is that cwnd doubles every transmission round

    In the first round, 1 packet is sent and 1 acknowledgement is received: cwnd + 1 = 2

    In the second round, 2 packets are sent and 2 acknowledgements are received: cwnd + 2 = 4

    And so on: cwnd doubles each round

Congestion avoidance algorithm

In slow start, cwnd increases by 1 for each acknowledgement, so each round it is double the previous round

In congestion avoidance, cwnd increases by 1 per round: this round's cwnd is last round's cwnd + 1

When network congestion occurs

Whether in the slow start phase or the congestion avoidance phase, if the sender judges that the network is congested (the retransmission timer times out), it performs the following:

```
ssthresh = max(cwnd / 2, 2)
cwnd = 1
execute the slow start algorithm
```
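Putting slow start, congestion avoidance, and the timeout reaction together, a small Python simulation of cwnd per transmission round (the round count, threshold, and loss round are illustrative):

```python
def tcp_cwnd_trace(rounds, ssthresh=16, loss_at=None):
    """Per-round cwnd under slow start, congestion avoidance, and timeout.

    Slow start doubles cwnd each round until it reaches ssthresh;
    congestion avoidance then adds 1 per round; on a timeout,
    ssthresh = max(cwnd // 2, 2) and cwnd restarts at 1.
    """
    cwnd, trace = 1, []
    for rnd in range(1, rounds + 1):
        trace.append(cwnd)
        if rnd == loss_at:                  # retransmission timer expired
            ssthresh = max(cwnd // 2, 2)
            cwnd = 1
        elif cwnd < ssthresh:
            cwnd = min(cwnd * 2, ssthresh)  # slow start
        else:
            cwnd += 1                       # congestion avoidance
    return trace

print(tcp_cwnd_trace(10, ssthresh=8, loss_at=6))
# [1, 2, 4, 8, 9, 10, 1, 2, 4, 5]
```

The trace shows doubling up to the threshold, linear growth afterwards, and the fall back to 1 (with a new threshold of 5) when the timeout fires in round 6.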

Fast retransmission

The fast retransmit algorithm requires the receiver to send a duplicate acknowledgement immediately upon receiving an out-of-order segment, so the sender learns early that a segment has not reached the receiver, rather than waiting for an acknowledgement piggybacked on the receiver's own data.

As soon as the sender receives three duplicate acknowledgements in a row, it immediately retransmits the missing segment without waiting for its retransmission timer to expire.

Fast recovery

When the sender receives three duplicate acknowledgements, it halves the threshold (ssthresh = cwnd / 2) but, instead of restarting slow start, sets cwnd = ssthresh and goes directly into congestion avoidance.

TCP connection management

TCP transport connections are divided into three phases

  1. Connection is established
  2. Data transfer
  3. Connection release
Before establishing a connection
  • The TCP connection is established in client/server mode

    • The application process that initiates the connection request is called a Client.
    • An application process that passively waits for a connection to be established is called a Server.
  • Both the client and server are aware of each other’s presence

  • Allow both parties to negotiate some parameters:

    • Maximum window value, timestamp, quality of service, etc
  • To be able to allocate transport entity resources:

    • Cache size, items in the join table, etc

The server creates the transport control block TCB in advance and enters the listening state

TCB stores the TCP connection request table, window value, serial number, time stamp and other important information required in the connection.

The client determines the address to access and also creates a transport control block TCB, ready to initiate network access

Connection is established

Three-way handshake

First handshake

The client sends a connection request packet segment to the server

User A sends A connection request packet to user B

The synchronization control bit is SYN = 1 and the initial sequence number is seq = x.

Second handshake

The server returns an acknowledgement message for connection request information

After receiving the connection request segment, B sends a confirmation segment if it agrees to the connection.

Confirmation segment header:

Synchronization bit SYN = 1, acknowledgement bit ACK = 1, acknowledgement number ack = x + 1, sequence number seq = y.

Third handshake

The client then sends an acknowledgement packet to the server to confirm the TCP connection

After receiving B's confirmation segment, A sends an acknowledgement to B. Header: ACK = 1, seq = x + 1, ack = y + 1.
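With Python's standard `socket` module, the whole three-way handshake is driven by `connect()` on the client and `listen()`/`accept()` on the server; the kernel exchanges the SYN, SYN+ACK, and ACK segments:

```python
import socket
import threading

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))   # port 0: let the OS pick a free port
server.listen()
port = server.getsockname()[1]

accepted = []

def serve():
    conn, addr = server.accept()  # returns once a handshake completes
    accepted.append(addr)
    conn.close()

t = threading.Thread(target=serve)
t.start()

client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(("127.0.0.1", port))  # SYN -> SYN+ACK -> ACK
client.close()
t.join()
server.close()
print("handshake completed with", accepted[0])
```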

Why two handshakes are not enough

Suppose an old connection request segment (connection request 1 in the figure) is delayed in the network and reaches the server late. With only two handshakes, the server would allocate resources and consider the connection established as soon as it confirms the request, but the client, which long ago abandoned that request, ignores the confirmation; the server's resources are then held and can never be released.

Connection release

Four waves

First wave

User A sends a connection-release (FIN) segment to user B

A sends a connection-release segment to B. Header: FIN = 1, seq = u, where u is the sequence number of the last byte of data already transmitted plus 1.

Second wave

User B sends an ACK segment to user A to confirm A's release request

After receiving the connection-release segment, B sends a confirmation if it agrees: acknowledgement bit ACK = 1, acknowledgement number ack = u + 1, sequence number seq = v. The connection in the direction from A to B is now released and the TCP connection is half-closed; if B still sends data to A, A will still receive it.

Third wave

User B then sends a FIN segment to user A to close the connection

B sends a connection-release segment to A

Header: FIN = 1, ACK = 1, seq = w, ack = u + 1

Fourth wave

User A sends an ACK segment to user B to confirm the connection release

After receiving B's connection-release segment, A sends an acknowledgement to B: ACK = 1, seq = u + 1, ack = w + 1. After waiting 2MSL, A closes, and the TCP connection is fully released.

The application layer

Domain name System DNS

The domain name structure of the Internet

The Internet adopts the naming method of hierarchical tree structure. Any host or router connected to the Internet has a unique hierarchical name, a domain name. The structure of a domain name consists of a sequence of labels separated by dots

….third-level domain.second-level domain.top-level domain
Top-level domain names
  1. Country code top-level domains: for example, cn (China), us (the United States), uk (Britain).
  2. Generic top-level domains: com (companies and enterprises), net (network service organizations), org (non-profit organizations), edu (educational institutions, US only), gov (government departments, US only), int (international organizations), etc.
  3. The infrastructure top-level domain arpa, used for reverse domain name resolution and therefore also called the reverse domain.
China’s domain name division

In China, second-level domain names are divided into category domains and administrative-region domains.

  1. There are 7 category domains: com (industrial, commercial and financial enterprises), net (organizations providing Internet services), org (non-profit organizations), edu (educational institutions), gov (Chinese government institutions), mil (Chinese national defense institutions), ac (scientific research institutions)

  2. There are 34 administrative-region domains, one for each of China's provinces, autonomous regions and municipalities, such as bj for Beijing and ha for Henan.

    The two domains circled in the figure are unrelated

Domain name server

The Domain Name System (DNS) uses domain name servers to resolve domain names into IP addresses.

The range a server is responsible for (has authority over) is called a zone.

A domain name server is configured for each zone to store the mapping between domain names and IP addresses of all hosts in the zone.

There are four types of DNS servers

  1. Root DNS server

    The root DNS server is the highest level DNS server and the most important DNS server. All root DNS servers know the domain names and IP addresses of all top-level DNS servers.

    If a local DNS server cannot resolve a domain name on the Internet by itself, it turns first to a root DNS server.

    There are 13 root DNS servers with different IP addresses on the Internet, named with the letters A through M.

    These DNS servers have many mirror servers

  2. Top-level domain name server

    A top-level domain name server manages all the second-level domain names registered under it.

  3. Authoritative domain name server

    The domain name server responsible for a zone. When an authoritative DNS server cannot give the final answer to a query, it tells the querying DNS client which DNS server to consult next

  4. Local DNS Server When a host sends a DNS query request, the query request packet is sent to the local DNS server.

    Every Internet service provider ISP, or a university, or even a university department, can have a local domain name server, sometimes called the default domain name server

Domain name resolution
Recursive query

A host normally queries its local DNS server recursively.

If the local DNS server queried by the host does not know the IP address for the domain name, the local DNS server itself acts as a DNS client and sends a query to a root DNS server.

Iterative query

The local DNS server queries the root DNS server iteratively.

When receiving the iterative query request packet from the local DNS server, the root DNS server either gives the IP address to be queried or tells the local DNS server which DNS server you should query next. Then let the local DNS server perform subsequent queries.

The cache

Each DNS server maintains a cache of recently used names and records of where to get the name mapping information.

This reduces the load of the root DNS server and reduces the number of DNS query requests and reply packets on the Internet.

To keep cache contents correct, the DNS server should set a timer for each entry and discard entries older than a reasonable time (for example, keeping each entry for at most two days).

The world wide web WWW

The World Wide Web uses links to easily access rich information on demand from one site on the Internet to another.

The World Wide Web operates in client/server mode. The browser is the Web client program on the user's computer; the computer on which Web documents reside runs the server program and is known as a World Wide Web server.

Uniform resource locator

Uniform Resource Locator (URL) is used to identify various documents on the World Wide Web. Each document has a unique identifier URL across the entire Internet.

URL format :< protocol >://< host >:< port >/< path >
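Python's standard `urllib.parse` splits a URL into exactly these parts (the Tsinghua URL is the document's own example host):

```python
from urllib.parse import urlparse

# <protocol>://<host>:<port>/<path>
url = urlparse("http://www.tsinghua.edu.cn:80/index.html")
print(url.scheme)    # http
print(url.hostname)  # www.tsinghua.edu.cn
print(url.port)      # 80
print(url.path)      # /index.html
```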

The HTTP protocol

The Hypertext Transfer protocol (HTTP) defines how the browser requests web documents from the Web server and how the server sends the documents to the browser.

HTTP protocol uses connection-oriented TCP as the transport layer protocol to ensure reliable data transmission.

Events that occur after the user clicks the mouse
  1. The browser analyzes the URL that the hyperlink points to the page.
  2. The browser sends a DNS request to resolve the IP address of www.tsinghua.edu.cn.
  3. DNS resolves the IP address of the Tsinghua University server.
  4. The BROWSER establishes a TCP connection with the server
  5. The browser issues a file fetch command:
  6. The server responds by sending the file in the specified path to the browser.
  7. The TCP connection is released.
  8. The browser displays the content of the specified page.
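Step 5, the "file fetch command", is just a text request line plus headers. A minimal sketch of the request an HTTP/1.1 client sends (headers beyond `Host` are illustrative):

```python
def http_get_request(host, path):
    """The raw request line and headers a browser sends in step 5."""
    return (f"GET {path} HTTP/1.1\r\n"
            f"Host: {host}\r\n"
            f"Connection: close\r\n"
            f"\r\n")

print(http_get_request("www.tsinghua.edu.cn", "/index.html"))
```

The server's reply in step 6 mirrors this shape: a status line (e.g. `HTTP/1.1 200 OK`), headers, a blank line, then the document itself.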
The time required to request a World Wide Web document

Time required to request a World Wide Web document = the document's transmission time + 2 × RTT (round-trip time).

The HTTP version
HTTP / 1.0

The HTTP/1.0 protocol is stateless. The server does not record information about customers that have been visited. This simplifies server design and makes it easier for the server to support large numbers of concurrent HTTP requests.

The HTTP/1.0 protocol uses non-persistent connections: each requested document needs its own TCP connection, which adds to the burden on the World Wide Web server.

HTTP / 1.1

The HTTP/1.1 protocol uses persistent connections

The Web server keeps the connection open for some time after sending a response, so that the same client (browser) and the server can continue to send subsequent HTTP request and response messages over this connection.

This is not limited to documents linked from the same page; it applies as long as the documents are on the same server.

Two ways in which persistent connections work

Non-pipelined: the client sends the next request only after receiving the previous response. Compared with the two RTTs a non-persistent connection costs per object, this saves the one RTT needed to establish a new TCP connection.

Pipelined: the client may send new HTTP requests before receiving the responses to earlier ones. As the requests arrive one after another, the server sends back the responses continuously. With pipelining, the client needs only about one RTT to fetch all the objects.

HTTP packet structure
The request message

The client sends request packets to the server.

Method: Request method, such as GET, POST, etc

The response message

From the server to the customer’s answer.

Status code: 200, 404, etc

Dynamic Host configuration protocol DHCP

Dynamic Host Configuration Protocol (DHCP) dynamically provides configuration parameters to network terminals.

DHCP provides plug-and-play networking. This allows a computer to join a new network and obtain an IP address without manual configuration. After a request is made, DHCP can provide the IP address, gateway address, and DNS server address for the terminal and set the parameters.

DHCP works in client/server mode

The DHCP server first looks up the computer's configuration information in its database. If found, that information is returned; if not, the server draws on its IP address pool. When any DHCP-enabled client logs in to the network, it can lease an IP address from the DHCP server, and unused IP addresses are automatically returned to the pool for reallocation. DHCP uses the transport layer protocol UDP, so DHCP messages are encapsulated in UDP datagrams.

Address assignment mode
Manual assignment

A network administrator assigns fixed IP addresses to a few hosts.

Automatic allocation

DHCP assigns a fixed IP address to a host the first time it connects to the network, and the host keeps that address permanently.

Dynamic allocation

Addresses are assigned to clients as leases. Each lease has a fixed term; when the term expires, the client must request a new address or renew the lease.
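The dynamic (lease-based) allocation described above can be sketched as a small address pool. The class name, time model, and addresses are illustrative assumptions, not real DHCP server code:

```python
# A DHCP-style pool: addresses are leased for a fixed term, and
# expired leases return the address to the pool for reallocation.

class AddressPool:
    def __init__(self, addresses, lease_seconds):
        self.free = list(addresses)
        self.leases = {}                  # client_id -> (ip, expiry time)
        self.lease_seconds = lease_seconds

    def allocate(self, client_id, now):
        # Prefer the client's previous address if its lease is still recorded.
        if client_id in self.leases:
            ip, _ = self.leases[client_id]
        else:
            ip = self.free.pop(0)
        self.leases[client_id] = (ip, now + self.lease_seconds)
        return ip

    def reclaim_expired(self, now):
        # Expired addresses go back to the pool for redistribution.
        for cid, (ip, expiry) in list(self.leases.items()):
            if expiry <= now:
                del self.leases[cid]
                self.free.append(ip)

pool = AddressPool(["192.168.1.10", "192.168.1.11"], lease_seconds=3600)
ip = pool.allocate("aa:bb:cc:dd:ee:ff", now=0)
pool.reclaim_expired(now=4000)   # lease expired -> address returned to the pool
```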

Types of DHCP Messages

The main message types are DISCOVER, OFFER, REQUEST, ACK, NAK (denial), RELEASE, DECLINE, and INFORM. The first four make up the normal lease acquisition process described below.

DHCP Relay Agent

A DHCP relay agent forwards DHCP packets between a DHCP server and a client.

Each network has at least one DHCP relay agent configured with the IP address of the DHCP server. The relay agent forwards DHCP discovery messages to the DHCP server in unicast mode and waits for the reply; after receiving the server's reply, it sends the message back to the host.

The working process
The discovery phase

The phase when a DHCP client looks for a DHCP server

  1. When a DHCP client logs in to the network for the first time and has no IP address, it broadcasts a DHCP DISCOVER message to search for a DHCP server
  2. Every TCP/IP host on the network receives the broadcast, but only DHCP servers respond
Offer phase

The phase when the DHCP server provides the IP address

After receiving the DHCP DISCOVER message, the DHCP server sends the DHCP client a DHCP OFFER message containing the leased IP address and other settings.

Selection phase

Phase when the DHCP client selects an IP address

If multiple DHCP servers send DHCP OFFER messages, the DHCP client accepts only the first one it receives. The client then broadcasts a DHCP REQUEST message; answering in broadcast mode notifies all DHCP servers which server's offered IP address the client has selected.

Acknowledgment phase

Phase in which the DHCP server acknowledges the offered IP address

After receiving the REQUEST message from the client, the DHCP server sends a DHCP ACK message to inform the client that it may use the offered IP address. All DHCP servers other than the one the client selected reclaim the IP addresses they had offered.
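The four-step exchange above (DISCOVER, OFFER, REQUEST, ACK) can be sketched as plain function logic. Server names and addresses are made up for illustration, and the UDP broadcast transport (ports 67/68) that real DHCP uses is omitted:

```python
# DORA exchange sketch: the client collects offers, selects the
# first one, and only the chosen server commits its address.

def dhcp_exchange(servers, client_mac):
    # client_mac is illustrative (real messages carry the client's MAC).
    # 1. Discovery: the client broadcasts; every server with a free
    #    address answers with an OFFER.
    offers = [(name, pool[0]) for name, pool in servers.items() if pool]
    # 2-3. Selection: the client takes the first OFFER and broadcasts
    #    a REQUEST naming the chosen server.
    chosen_server, offered_ip = offers[0]
    # 4. Acknowledgment: the chosen server ACKs and commits the lease;
    #    the other servers simply withdraw their offers.
    servers[chosen_server].remove(offered_ip)
    return chosen_server, offered_ip

servers = {"srv1": ["10.0.0.5"], "srv2": ["10.0.0.99"]}
print(dhcp_exchange(servers, "aa:bb:cc:dd:ee:ff"))  # ('srv1', '10.0.0.5')
```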

Log back in

Each time a DHCP client logs in to the network again, it directly sends a REQUEST message containing its previously assigned IP address instead of a DISCOVER message. When the DHCP server receives this message, it tries to let the client keep the original IP address and replies with a DHCP ACK message.

If the original IP address can no longer be assigned to that client (for example, it has been given to another DHCP client), the DHCP server sends the client a DHCP NAK denial message. On receiving the DHCP NAK, the client must send a DHCP DISCOVER message to request a new IP address.

Update the lease

When the lease on an IP address expires, the DHCP server takes the address back. A DHCP client that wants to keep its address must renew the lease. When the client restarts or the lease reaches 50% of its term, the client sends a DHCP REQUEST message directly to the leasing server to renew. If it cannot contact that server, the client enters the rebinding state when the lease reaches 87.5% of its term.
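The two thresholds in the paragraph above are often called the renewal (T1) and rebinding (T2) timers. A trivial worked calculation, using the 50% and 87.5% fractions from the text:

```python
# Compute the renewal and rebinding times for a lease.

def lease_timers(lease_seconds: int):
    t1 = lease_seconds * 0.5      # renewing: unicast REQUEST to the leasing server
    t2 = lease_seconds * 0.875    # rebinding: try to reach any DHCP server
    return t1, t2

print(lease_timers(86400))  # one-day lease -> (43200.0, 75600.0)
```

So with a one-day (86400 s) lease, the client first tries to renew after 12 hours and enters the rebinding state after 21 hours; if the lease runs out entirely, it must start over with DISCOVER.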