I have collected my previous articles into a GitHub repository. Stars are very welcome: github.com/crisxuan/be…

If computers carried us from the industrial age to the information age, then computer networks have carried us into the Internet age. As more and more people began using computers, the machines themselves went through a series of stages: large general-purpose computers -> supercomputers -> minicomputers -> personal computers -> workstations -> portable computers -> smartphone terminals. In parallel, computer usage gradually evolved from a standalone mode to a network interconnection mode.

As the figure shows, in standalone mode users have to queue for a machine: one person must finish before the next can start, and each machine's data is managed separately.

Now switch to network interconnection mode. Here everyone can use a computer independently, and a server can even provide services to all of the users (Big Brother, CXuan and Sonsong in the figure). Data is managed centrally.

By size, computer networks are classified into Wide Area Networks (WANs) and Local Area Networks (LANs), as shown in the figure below.

The figure above shows a local area network, which covers only a small area; a residential community, a building, or an office typically uses a LAN.

A network formed over a large distance is usually a wide area network.

At first, computer networks were just a fixed set of computers linked together. Such networks were generally private and inaccessible to outside computers. Over time, people began connecting these private networks into ever larger networks, which gradually developed into the Internet; today almost everyone enjoys the convenience it brings.

The development of computer networks

Batch processing

Like early computer operating systems, batch processing in its initial stage was designed to make computers accessible to as many people as possible.

In batch processing, programs and data are loaded onto cards or magnetic tape, and the computer reads and processes them in a fixed order.

At that time, such computers were expensive and not something everyone could own, which in practice meant that only specialized operators ran them. Users submitted their programs to the operators, the programs were queued for execution, and after some waiting the users came back to retrieve the results.

Used this way, the computer's efficiency was poorly exploited; end to end, the process could even feel slower than manual calculation.

Time-sharing system

After batch processing came time-sharing systems, in which multiple terminals are connected to the same computer, allowing several users to work on it at once. Time-sharing achieved the effect of "one person, one computer": each user feels as if the machine is exclusively theirs, even though it is shared.

Time-sharing greatly improved the accessibility of computers and brought them much closer to everyday life.

One more thing to note: the advent of time-sharing systems led to the creation of human-computer interaction languages like BASIC.

The emergence of time-sharing systems also helped drive the emergence of computer networks.

Computer communication

In a time-sharing system, terminals are connected to a central computer, but this is terminal-to-computer communication, not communication between computers; each person is still effectively using one computer on their own.

In the 1970s, computer performance improved rapidly while machines became smaller and smaller; the barrier to using a computer dropped, and more and more people could use one.

Computers could no longer remain isolated islands of information, and this need to exchange data drove the emergence and development of computer-to-computer communication.

The birth of computer networks

In the 1980s, networks appeared that could interconnect many kinds of computers, from large supercomputers and mainframes down to small personal machines.

In the 1990s, the "one person, one machine" environment became a reality, although it was still expensive to build. At the same time, information services such as e-mail and the World Wide Web grew at an unprecedented rate, spreading the Internet from large companies into ordinary households.

The rapid development of computer network

Nowadays more and more terminal devices are connected to the Internet, which has grown at an unprecedented pace. The development of 3G, 4G and 5G communication technology in recent years is a product of this rapid growth.

Network technologies that developed along separate paths are also converging onto the Internet. The telephone network, for example, was long the infrastructure that supported communications; with the rise of the Internet, that role has gradually been taken over by Internet Protocol (IP) networks.

Network security

The Internet has two sides: it is convenient for ordinary users, but it is also convenient for criminals. Its convenience has brought negative effects as well, and computer viruses, information leaks and online fraud emerge endlessly.

In real life we can usually fight back when attacked, but on the Internet, when criminals attack you, you are usually powerless to strike back and can only defend, because counter-attacking requires real proficiency with computers and networks, which most people do not have.

Companies and enterprises are easy targets for profit-motivated criminals, so to avoid or withstand attacks they need to establish secure connections to the Internet.

Network protocols

The word "protocol" (agreement) is not limited to the Internet; it also shows up in daily life. When a couple agrees to have dinner at a certain place, that agreement is a kind of protocol too. Note that an "agreement" one person makes with themselves does not count; a protocol presupposes agreement among multiple parties.

So what is the network protocol?

Network protocols are the rules for transmitting and managing information in networks, including the Internet. Just as people must follow certain conventions to communicate with each other, computers must follow certain rules to communicate, and these rules are called network protocols.

Without network protocols the Internet would be chaos. Just as in human society people cannot simply do whatever they like because behaviour is constrained by law, end systems on the Internet cannot send whatever they want however they want; they are constrained by communication protocols.

The protocol we are most familiar with is HTTP, a convention and specification for transferring hypertext data such as text, pictures, audio and video between two points in the computer world.
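
To make this concrete, here is a minimal sketch using Python's standard http.client module to issue a plain HTTP GET request; example.com is simply a placeholder host used for documentation-style examples.

```python
import http.client

# Open a plain (non-TLS) HTTP connection and send a GET request for "/".
conn = http.client.HTTPConnection("example.com", 80, timeout=5)
conn.request("GET", "/", headers={"User-Agent": "demo-client"})

resp = conn.getresponse()
print(resp.status, resp.reason)           # e.g. 200 OK
print(resp.getheader("Content-Type"))     # response headers are plain text as well
body = resp.read()                        # the hypertext payload itself
conn.close()
```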

But the Internet is not just HTTP; it has many other protocols such as IP, TCP, UDP and DNS. The table below summarizes some of them.

Network architecture | Protocols | Main purpose
TCP/IP | HTTP, SMTP, TELNET, IP, ICMP, TCP, UDP, etc. | Mainly used on the Internet and in LANs
IPX/SPX | IPX, SPX, NCP | Mainly used in personal-computer LANs
AppleTalk | DDP, AEP, ADP | Used to connect Apple products

Before standardization, ISO discussed the problems of network architecture at length and eventually proposed the OSI reference model as a design guideline for communication protocols. The model divides the functions a communication protocol needs into seven layers, and this layering makes complex protocols easier to manage.

In the OSI reference model, each layer uses the services provided by the layer below it and in turn provides services to the layer above it. The conventions followed when adjacent layers interact are called interfaces, while the conventions followed when peer layers on different systems interact are called protocols.

OSI standard model

The figure above only sketches how peer layers and adjacent layers communicate; it does not show the specific layers of the model. The OSI reference model divides complex protocol functionality into seven easier-to-understand layers, as shown in the figure below.

Each Internet communication protocol corresponds to one of these seven layers, and from this we can understand the role a protocol plays in the overall model. Roughly speaking, the main role of each layer is as follows:

  • The application layer: The application layer is the top layer of the OSI reference model and provides services directly to application processes. Its role is to enable communication between application processes on different systems and to carry out the services that business processing needs, such as file transfer, e-mail, remote login and remote procedure invocation.
  • The presentation layer: The presentation layer, the sixth layer of the model, serves the application layer above it and uses the services of the session layer below it. Its main job is to convert data between a device's native format and the network's standard transmission format.
  • The session layer: The session layer, the fifth layer of the model, is built on top of the transport layer and uses the transport layer's services to establish and maintain sessions.
  • The transport layer: The transport layer is the fourth layer and plays a central role in the whole model. It handles data transfer between two end nodes and provides reliable data transfer service to the layers above it. A complete transport-layer service generally passes through three phases: connection establishment, data transfer, and connection release.
  • The network layer: The network layer is the third layer, sitting between the transport layer and the data link layer. It moves data from the source to the destination, possibly through several intermediate nodes, thereby providing the most basic end-to-end data transfer service to the transport layer.
  • The data link layer: The data link layer sits between the physical layer and the network layer and defines how data is transmitted over a single link.
  • The physical layer: The physical layer is the lowest layer of the model and the foundation of the whole protocol stack, much like the foundation of a house. It provides the transmission media and interconnection hardware for data communication between devices and a reliable environment for transmitting raw bits.

TCP/IP protocol suite

TCP/IP is the protocol we programmers deal with most. Strictly speaking, TCP/IP refers to the TCP/IP protocol suite: it does not mean only the TCP and IP protocols, but encompasses many network protocols.

The OSI model has seven layers: physical, data link, network, transport, session, presentation and application. That is a bit complicated, so TCP/IP reduces them to four layers.

The main differences from the seven-layer OSI model are as follows:

  • The services provided by the application layer, presentation layer, and session layer are not very different, so in TCP/IP, they are combined into one application layer.
  • Because the data link layer and the physical layer cover similar concerns, TCP/IP merges them into a single network interface layer.

Our main object of study is this four-layer TCP/IP model.
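
A rough, illustrative sketch (this grouping is for orientation only, not an authoritative layering) of the four layers and a few protocols that live in each:

```python
# The four TCP/IP layers, each with a few representative protocols.
tcp_ip_stack = {
    "application layer": ["HTTP", "FTP", "DNS", "SMTP", "TELNET"],
    "transport layer": ["TCP", "UDP"],
    "network layer": ["IP", "ICMP"],
    "network interface layer": ["Ethernet", "PPP", "SLIP"],
}

for layer, protocols in tcp_ip_stack.items():
    print(f"{layer:24s} {', '.join(protocols)}")
```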

Next, cxuan will walk through the specific protocols in the TCP/IP suite.

IP protocol

IP, the Internet Protocol, sits at the network layer. It is the core of the whole TCP/IP family and the foundation of the Internet. IP delivers packets on behalf of the transport layer and reassembles incoming data for it. By joining many individual networks into one internetwork it improves scalability and enables large-scale interconnection, and it also decouples the upper-layer applications from the underlying physical networks.
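
The idea of IP addresses and network prefixes can be illustrated with Python's ipaddress module; the addresses below are arbitrary private-range examples.

```python
import ipaddress

# A network groups hosts that share a prefix; routers forward traffic between prefixes.
net = ipaddress.ip_network("192.168.1.0/24")
host = ipaddress.ip_address("192.168.1.42")

print(host in net)            # True: the address belongs to 192.168.1.0/24
print(net.network_address)    # 192.168.1.0
print(net.broadcast_address)  # 192.168.1.255
print(net.num_addresses)      # 256 addresses in a /24
```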

The ICMP protocol

ICMP is the Internet Control Message Protocol, a network-layer protocol used to carry control messages between IP hosts and routers. When a router cannot reach a destination, or cannot forward packets at the current rate, it automatically sends an ICMP message. ICMP can therefore be seen as an error detection and reporting mechanism that lets us check network status and confirm the health of connections.
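
To give a feel for what an ICMP message looks like on the wire, the sketch below builds an ICMP Echo Request (the message type used by ping) with a valid Internet checksum. It only constructs the bytes; actually sending them would require a raw socket and administrator privileges.

```python
import struct

def internet_checksum(data: bytes) -> int:
    """16-bit one's-complement Internet checksum (RFC 1071)."""
    if len(data) % 2:
        data += b"\x00"
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) + data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)   # fold any carry back in
    return ~total & 0xFFFF

def build_echo_request(identifier: int, sequence: int, payload: bytes = b"ping") -> bytes:
    """ICMP Echo Request: type 8, code 0, checksum, identifier, sequence number."""
    header = struct.pack("!BBHHH", 8, 0, 0, identifier, sequence)
    checksum = internet_checksum(header + payload)
    return struct.pack("!BBHHH", 8, 0, checksum, identifier, sequence) + payload

packet = build_echo_request(identifier=0x1234, sequence=1)
print(packet.hex())   # raw ICMP bytes; a ping tool would hand these to a raw socket
```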

ARP protocol

ARP is the Address Resolution Protocol, which obtains a physical (MAC) address from an IP address. The host broadcasts an ARP request containing the destination IP address to every host on the LAN and uses the reply to learn the physical address. The IP-to-physical-address mapping from the reply is kept in the ARP cache for a while, so subsequent lookups can be answered directly from the cache.

TCP protocol

TCP, the Transmission Control Protocol, is a connection-oriented, reliable, byte-stream-based transport protocol. It sits at the transport layer and is a core protocol of the TCP/IP suite; its defining feature is reliable data delivery.

Key mechanisms of TCP include slow start, congestion control, fast retransmit and fast recovery.
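
A minimal sketch of TCP's connection-oriented byte stream through Python's socket API, run entirely over the loopback interface; the port number is an arbitrary choice for the example.

```python
import socket
import threading

# Server side: bind, listen, and echo back whatever arrives on one connection.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 9000))    # arbitrary local port for this sketch
srv.listen(1)

def echo_once():
    conn, _ = srv.accept()       # blocks until the three-way handshake completes
    with conn:
        conn.sendall(conn.recv(1024))

threading.Thread(target=echo_once, daemon=True).start()

# Client side: connect() establishes the connection before any data is exchanged.
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
    cli.connect(("127.0.0.1", 9000))
    cli.sendall(b"hello over TCP")
    print(cli.recv(1024))        # b'hello over TCP'

srv.close()
```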

UDP protocol

UDP is also a transport-layer protocol. Compared with TCP, UDP provides unreliable delivery: it does not guarantee that data reaches the target node, and after a datagram is sent there is no way to know whether it arrived safely and intact. UDP is connectionless, so the sender does not establish a connection with the destination before transmitting, does not confirm whether datagrams were received, and does not wait for a reply from the other side. In exchange, UDP offers better real-time behaviour and lower overhead than TCP.
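
For contrast, an equally minimal UDP sketch: no connection is set up, and the sender simply fires a datagram at an address (again over loopback, with an arbitrary port).

```python
import socket

# Receiver: bind a datagram socket to a local port and wait for one datagram.
recv_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
recv_sock.bind(("127.0.0.1", 9001))

# Sender: no handshake, no delivery guarantee; just send and forget.
send_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
send_sock.sendto(b"hello over UDP", ("127.0.0.1", 9001))

data, addr = recv_sock.recvfrom(1024)
print(data, addr)    # b'hello over UDP' and the sender's (ip, port)

send_sock.close()
recv_sock.close()
```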

The FTP protocol

FTP is the File Transfer Protocol, an application-layer protocol and an important member of the TCP/IP suite. FTP has two sides, server and client: the client accesses files stored on the FTP server. FTP transfers are efficient, so it is commonly used to move large files.
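
A sketch of FTP usage with Python's ftplib; the server name, credentials and file name below are placeholders, not a real service.

```python
from ftplib import FTP

# ftp.example.com, the user "demo" and the file name are hypothetical placeholders.
with FTP() as ftp:
    ftp.connect("ftp.example.com", 21, timeout=10)
    ftp.login(user="demo", passwd="secret")          # control connection, plain-text credentials
    print(ftp.nlst())                                # list files in the current directory
    with open("report.pdf", "wb") as f:
        ftp.retrbinary("RETR report.pdf", f.write)   # the data connection carries the file
```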

DNS protocol

DNS, the Domain Name System, is another application-layer protocol. It is a distributed database that maps domain names to IP addresses, and DNS caching speeds up access to network resources.
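
A quick illustration of name resolution through the operating system's resolver, using Python's socket module; the printed addresses depend on what the resolver (and any cache) returns.

```python
import socket

# Forward lookup: domain name -> one or more IP addresses.
for family, _, _, _, sockaddr in socket.getaddrinfo("example.com", 443, proto=socket.IPPROTO_TCP):
    print(family.name, sockaddr[0])     # e.g. AF_INET and AF_INET6 addresses

# Reverse lookup: IP address -> name (a PTR record), when one exists.
print(socket.gethostbyaddr("8.8.8.8")[0])
```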

The SMTP protocol

SMTP, the Simple Mail Transfer Protocol, is an application-layer protocol used to send and relay mail. An SMTP server is a mail-sending server that follows the SMTP protocol and is used to send or forward the e-mail messages users submit.
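
A sketch of submitting a message over SMTP with Python's smtplib; the server, addresses and password are placeholders, and a real submission server would normally require TLS and authentication as shown.

```python
import smtplib
from email.message import EmailMessage

# smtp.example.com and both mailbox addresses are hypothetical placeholders.
msg = EmailMessage()
msg["From"] = "alice@example.com"
msg["To"] = "bob@example.com"
msg["Subject"] = "Hello over SMTP"
msg.set_content("This message is handed to an SMTP server for delivery.")

with smtplib.SMTP("smtp.example.com", 587, timeout=10) as smtp:
    smtp.starttls()                                   # upgrade the session to TLS
    smtp.login("alice@example.com", "app-password")   # placeholder credentials
    smtp.send_message(msg)                            # server relays it toward bob's mail server
```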

SLIP protocol

SLIP, the Serial Line Internet Protocol, is a point-to-point link-layer protocol for carrying TCP/IP over serial communication lines.

The PPP protocol

PPP, the Point-to-Point Protocol, is a link-layer protocol for transmitting packets between peer units. It was designed to send data over dial-up or dedicated point-to-point connections, making it a common solution for simple links between hosts, bridges and routers.

Core Network Concepts

Transmission modes

Networks can be classified by transmission mode into two broad types: connection-oriented and connectionless.

  • In connection-oriented mode, a communication line needs to be established between hosts before data can be sent.
  • In connectionless mode, no connection is established or torn down; the sender can transmit data at any time, and the receiver never knows in advance when or from whom data will arrive.

Packet switching

In Internet applications, end systems exchange information with one another in units called messages. A message is an aggregate that can contain anything: text, data, e-mail, audio, video and so on. To send a message from a source end system to a destination, the source breaks the long message into smaller blocks of data called packets; in other words, a message is made up of packets. Between source and destination, each packet travels through communication links and packet switches. Transmitting a packet takes time: if a packet of L bits is to be sent over a link whose transmission rate is R bits per second, the transmission time is L/R seconds.
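
A tiny worked example of that formula, with assumed values for L and R:

```python
# Transmission time for one packet over one link: L bits pushed at R bits per second.
L = 8_000_000      # a 1 MB packet expressed in bits (assumed example value)
R = 100_000_000    # a 100 Mbps link (assumed example value)

transmission_time = L / R
print(f"{transmission_time:.3f} s")   # 0.080 s to push the whole packet onto the link
```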

An end system sends packets to other end systems through packet switches. When a packet arrives at a switch, can the switch forward it right away? No. The switch is not that generous: "You want me to forward your packet? Fine, first hand me the whole packet, and then I will think about sending it on." This is store-and-forward transmission.

Store-and-forward transmission

In store-and-forward transmission, the switch must receive the entire packet before it can forward the first bit of that packet onto the outbound link. A schematic diagram of store-and-forward transmission is shown below.

In the figure, the source is sending packets 1, 2 and 3 toward the switch, and the switch has started receiving the bits of packet 1. Will it forward those bits immediately? No: the switch first buffers the packet locally. It is a bit like cheating on an exam: a top student passes the answers to poor student B by way of poor student A. Does A hand the answers straight over as they arrive? Of course not; A says, "let me copy the answers first, then I'll pass the sheet on to you."
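
Extending the same arithmetic: on a path of N store-and-forward links that all run at the same rate (an assumed simplification that ignores propagation and queuing), the packet is delayed by N full transmission times.

```python
# End-to-end delay of one packet over N store-and-forward links of equal rate R.
L = 8_000_000      # packet length in bits (assumed example value)
R = 100_000_000    # each link runs at 100 Mbps (assumed example value)
N = 3              # source -> router -> router -> destination: 3 links

end_to_end = N * L / R
print(f"{end_to_end:.3f} s")   # 0.240 s: three complete store-and-forward transmissions
```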

Queuing delay and packet loss

What, you thought a switch connects to only one communication link? Not at all: it is a switch, after all, so of course it attaches to multiple links.

You can imagine, then, that when several end systems send packets to the switch at the same time, there are bound to be ordering and queuing problems. For each attached link, the packet switch has an output buffer (output queue) that holds the packets the router is about to send onto that link. If an arriving packet finds that the link is busy transmitting another packet, the newcomer must wait in the output queue; this waiting time is the queuing delay. Together with the store-and-forward delay mentioned above, that gives us two kinds of delay so far, but in fact there are four. None of these delays is fixed; how large they are depends on how congested the network is.

Because the queue has finite capacity, when packets pour in from multiple links at once the output buffer may be unable to hold them all, and packets are lost; this is packet loss. Either the newly arriving packet or one already queued will be discarded.

The following figure illustrates a simple packet-switched network

In the figure above, packets are drawn as three-dimensional slabs whose width indicates the packet size; all the slabs have the same width, so all packets are the same size. Consider this scenario: hosts A and B both want to send packets to host E. They first send their packets over 100 Mbps Ethernet links to the first router, which then directs the packets onto a 15 Mbps link. If, over a short interval, packets arrive at the router faster than 15 Mbps (measured in bits per second), congestion occurs: the packets queue in the link's output buffer before they can be transmitted onto the link. For example, if hosts A and B each send a burst of five packets back to back at the same moment, most of those packets will spend some time waiting in the queue. The situation is entirely analogous to everyday queues, such as waiting for a bank teller or sitting in line at a toll booth.

Forwarding table and router selection protocol

As we just said, a router attaches to multiple communication links. If packets arrive on several links at once, queuing and possibly packet loss occur, and packets sit in the queue waiting to be sent. Here is a question: when a packet leaves the queue, onto which link is it sent, and what mechanism decides that?

Think about what routing actually does: it stores and forwards packets between different end systems. On the Internet every end system has an IP address, and when a source host sends a packet it places the destination's IP address in the packet header. Every router has a forwarding table. When a packet arrives, the router examines (part of) the destination address, looks that address up in its forwarding table to find the appropriate outbound link, and forwards the packet onto that output link.

How is the forwarding table set up inside a router? We’ll get to the details later, but just to give you an idea, there’s a routing protocol inside the router that automatically sets up the forwarding table.
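
As a toy illustration of the idea (the prefixes and link names below are invented, and real routers use far more efficient longest-prefix-match structures), a forwarding-table lookup might look like this:

```python
import ipaddress

# Hypothetical forwarding table: destination prefixes mapped to output links.
forwarding_table = {
    ipaddress.ip_network("10.0.0.0/8"): "link-1",
    ipaddress.ip_network("10.1.0.0/16"): "link-2",
    ipaddress.ip_network("0.0.0.0/0"): "link-3",   # default route
}

def lookup(destination: str) -> str:
    """Choose the longest (most specific) prefix that contains the destination address."""
    dst = ipaddress.ip_address(destination)
    matches = [net for net in forwarding_table if dst in net]
    best = max(matches, key=lambda net: net.prefixlen)
    return forwarding_table[best]

print(lookup("10.1.2.3"))   # link-2: the /16 beats the /8 and the default route
print(lookup("8.8.8.8"))    # link-3: only the default route matches
```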

Circuit switching

The other way of moving data through the links and switches of a network is circuit switching. Circuit switching differs from packet switching in resource reservation: packet switching does not reserve buffers or link transmission capacity for the traffic between end systems, so packets may have to queue at each hop, whereas circuit switching reserves those resources in advance. A simple analogy: there are two restaurants, restaurant A taking reservations and restaurant B not. For restaurant A we must call ahead, but when we arrive we can be seated and order straight away. For restaurant B we do not need to contact anyone in advance, but we run the risk of waiting in line when we get there.

A circuit switched network is shown below

In this network, the four circuit switches are interconnected by four links. Each link carries four circuits, so each link can support four simultaneous connections. Each host is connected directly to one of the switches, and when two hosts need to communicate the network establishes a dedicated end-to-end connection between them.

Packet switching versus circuit switching

Critics of packet switching (often supporters of circuit switching) say that it is unsuitable for real-time services because its end-to-end delays are unpredictable. Supporters of packet switching counter that it shares bandwidth better than circuit switching and is simpler, more efficient and cheaper to implement. Either way, the trend is clearly toward packet switching.

Delay, packet loss and throughput in packet switched networks

The Internet can be thought of as an infrastructure providing services to distributed applications running on end systems. Ideally we would move data between any two end systems instantly and without any loss, but that is a lofty goal that reality cannot meet: in practice networks necessarily limit the throughput between end systems, introduce delays, and may even lose packets. So let us look at computer networks from the three angles of delay, packet loss and throughput.

Delay in packet switching

A packet starts at a host (the source), travels through a series of routers, and ends its journey at another end system (the destination). Along the way it suffers four main types of delay at each node: nodal processing delay, queuing delay, transmission delay and propagation delay. Together these add up to the total nodal delay.

If dproc, dqueue, dtrans and dprop denote the processing delay, queuing delay, transmission delay and propagation delay respectively, the total nodal delay is given by: dnodal = dproc + dqueue + dtrans + dprop.
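
A small worked example of the formula, with assumed values for each component:

```python
# Total nodal delay = processing + queuing + transmission + propagation delay.
d_proc = 0.000_02                    # 20 microseconds to examine the header and pick an output link
d_queue = 0.001                      # 1 ms spent waiting in the output queue
d_trans = 8_000_000 / 100_000_000    # L/R: a 1 MB packet on a 100 Mbps link = 80 ms
d_prop = 1_000_000 / 200_000_000     # d/s: 1,000 km at 2 * 10^8 m/s = 5 ms

d_nodal = d_proc + d_queue + d_trans + d_prop
print(f"total nodal delay = {d_nodal * 1000:.2f} ms")   # about 86.02 ms
```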

Type of delay

The following is a typical delay distribution diagram. Let’s analyze the different delay types from the diagram

A packet travels from the end system over a communication link to router A, which examines the packet header, determines the appropriate outbound link, and sends the packet onto that link. The packet can be transmitted on the link only if no other packet is currently being transmitted and no other packet is ahead of it in the queue; if the link is busy or other packets are waiting, the new packet joins the queue. Let us now discuss the four delays one by one.

Nodal processing delay

Nodal processing delay has two parts: the time the router spends examining the packet's header, and the time needed to decide which outbound link the packet should be sent to. On high-speed routers the processing delay is typically on the order of microseconds or less. Once processing is finished, the packet is placed in the router's output queue.

Queuing delay

While waiting to be forwarded, a packet sits in the queue, and the time it spends waiting there is the queuing delay. How long it waits depends on how many packets arrived ahead of it: if the queue is empty and no packet is currently being transmitted, the queuing delay is zero; if the network is in a busy period and many packets are in flight on the link, the queuing delay grows. In practice, queuing delays range from microseconds to milliseconds.

Transmission delay

The queue is the main data structure a router uses, and it is first in, first out: whoever arrives first gets served first. Transmission delay is the time needed to push all of a packet's bits onto the link. If the packet is L bits long and R is the transmission rate of the link from router A to router B, the transmission delay is L/R, i.e. the time to push the entire packet onto the link. In practice, transmission delays are typically on the order of microseconds to milliseconds.

Propagation delay

Propagation delay is the time a bit needs to travel from the beginning of the link to router B. The bit propagates at the propagation speed of the link, which depends on the physical medium (twisted pair, coaxial cable, optical fibre). As a formula, the propagation delay equals the distance between the two routers divided by the propagation speed, i.e. d/s, where d is the distance between router A and router B and s is the propagation speed on the link.

Comparison of transmission delay and propagation delay

Newcomers sometimes confuse transmission delay with propagation delay. Transmission delay is the time a router needs to push the packet out; it is a function of the packet length and the link transmission rate and has nothing to do with the distance between the two routers. Propagation delay is the time a bit needs to travel from one router to the next; it is a function of the distance between the routers and has nothing to do with the packet length or the link transmission rate. The formulas make this clear: transmission delay is L/R, the packet length divided by the link rate, while propagation delay is d/s, the distance between the routers divided by the propagation speed.
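
A short numerical comparison makes the contrast obvious; the packet size, link rate, distance and propagation speed below are assumed, illustrative values.

```python
# Transmission delay depends on packet length and link rate;
# propagation delay depends on distance and signal speed.
L = 12_000           # a 1500-byte frame, in bits
R = 1_000_000_000    # a 1 Gbps link
d = 3_000_000        # 3,000 km between the routers, in metres
s = 200_000_000      # roughly 2 * 10^8 m/s in fibre

print(f"transmission delay L/R = {L / R * 1e6:.1f} us")   # 12.0 us, unaffected by distance
print(f"propagation delay  d/s = {d / s * 1e3:.1f} ms")   # 15.0 ms, unaffected by L and R
```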

Queuing delay

Of the four types, the most interesting is probably the queuing delay dqueue. Unlike the other three (dproc, dtrans and dprop), the queuing delay can differ from packet to packet. For example, if 10 packets arrive at an empty queue at the same moment, the first packet transmitted suffers no queuing delay, while the last one suffers the largest queuing delay, since it must wait for the other nine to be transmitted.

So how do we characterize queuing delay? Three factors matter: the rate at which traffic arrives at the queue, the transmission rate of the link, and the nature of the arriving traffic, i.e. whether it arrives periodically or in bursts. Let a denote the average rate at which packets arrive at the queue (in packets per second, pkt/s), and recall that R is the transmission rate, the rate (in bits per second, bps) at which bits are pushed out of the queue. Assuming every packet consists of L bits, the average rate at which bits arrive at the queue is La bps. The ratio La/R is called the traffic intensity. If La/R is greater than 1, bits arrive at the queue faster than they can be transmitted out of it, and the queue tends to grow without bound; a system should therefore be designed so that the traffic intensity is no greater than 1.

Now consider La/R <= 1. Here the nature of the arriving traffic determines the queuing delay. If packets arrive periodically, one every L/R seconds, each packet finds an empty queue and there is no queuing delay. If packets arrive in bursts, the average queuing delay can be large. The figure below shows the typical relationship between average queuing delay and traffic intensity.

The horizontal axis is the traffic intensity La/R, and the vertical axis is the average queuing delay.
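
A quick calculation of the traffic intensity itself, with assumed values for a, L and R:

```python
# Traffic intensity La/R: average bit-arrival rate relative to the link's transmission rate.
a = 2_000         # packets arriving per second, on average (assumed)
L = 8_000         # bits per packet, i.e. 1000-byte packets (assumed)
R = 20_000_000    # a 20 Mbps output link (assumed)

intensity = (L * a) / R
print(f"traffic intensity = {intensity:.2f}")   # 0.80: the queue stays bounded, but delay grows
# If L * a exceeded R (intensity > 1), the queue would tend to grow without bound
# and packets would eventually have to be dropped.
```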

Packet loss

We said above that La/R should not exceed 1, because if it did the queue would grow without bound; but a router's queue can in reality hold only a finite number of packets. When the queue is full, an arriving packet finds no room, the router drops it, and the packet is lost.

Throughput in a computer network

Besides delay and packet loss, another crucial performance measure in a computer network is end-to-end throughput. If a large file is being sent from host A to host B, the rate at which host B is receiving the file at any instant is the instantaneous throughput. If the file consists of F bits and host B needs T seconds to receive all of them, the average throughput of the transfer is F/T bps.
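
A one-line worked example with assumed values for F and T:

```python
# Average end-to-end throughput of a file transfer: F bits delivered in T seconds.
F = 400_000_000    # a 50 MB file, expressed in bits (assumed example value)
T = 32.0           # seconds host B needed to receive the whole file (assumed)

throughput = F / T
print(f"average throughput = {throughput / 1e6:.1f} Mbps")   # 12.5 Mbps
```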

Unicast, broadcast, multicast and anycast

Network communication can be classified by the number of destination addresses into unicast, broadcast, multicast and anycast.

Unicast

The defining feature of unicast is one-to-one communication. The early fixed-line telephone is a classic example, as shown in the diagram below.

Broadcast

The morning broadcast exercises we did as kids are an example of broadcast: the broadcasting host sends its signal to every end system it is connected to.

Multicast

Multicast is similar to broadcast in that it sends messages to multiple receiving hosts, except that it is limited to a single group of hosts.

Anycast

Anycast is a communication method in which one receiver is selected from a designated group of hosts. Although it looks like multicast, its behaviour differs: anycast picks the host that best fits the current network conditions as the target and sends the message only to it. The selected host then returns a unicast reply, and the sender subsequently communicates with that particular host.

Physical media

Network transmission requires a medium. A bit travels from one end system through a series of links and routers to another end system, being forwarded many times along the way, and whatever it crosses during this journey is called the physical medium. There are many kinds of physical media, such as twisted-pair copper wire, coaxial cable, multimode fibre-optic cable, terrestrial radio spectrum and satellite radio spectrum, but broadly they fall into two classes: guided media and unguided media.

Twisted-pair copper wire

The cheapest and most widely used guided transmission medium is twisted-pair copper wire, which telephone networks have relied on for many years; more than 99% of the wired connections from telephones to local telephone exchanges use twisted-pair copper wire, as shown below.

A twisted pair consists of two insulated copper wires, each about 1 mm thick, wound around each other in a regular spiral. Many pairs are usually bundled together inside a cable, with a protective sheath wrapped around them; one pair constitutes a single communication link. Unshielded twisted pair is the medium most commonly used in local area networks (LANs).

Coaxial cable

Like twisted-pair, coaxial cable is made up of two copper conductors, as shown below

Thanks to this structure, plus special insulation and shielding, coaxial cable can achieve higher transmission rates and is widely used in cable television systems. Coaxial cable is often used as a guided shared medium.

Optical fiber

An optical fiber is a thin, flexible medium capable of guiding pulses of light, each pulse representing a bit. A single optical fiber can support extremely high bit rates, up to tens or even hundreds of Gbps. They are not subject to electromagnetic interference. Optical fiber is a guided physical medium. The following is a physical picture of optical fiber

Fiber optics are used extensively in the backbone of the Internet.

Terrestrial radio channel

Radio channels carry signals in the electromagnetic spectrum. They require no physical wiring, can penetrate walls, provide connectivity to mobile users, and can carry signals over long distances.

Satellite radio channel

A satellite channel links two or more microwave transmitter/receiver stations on Earth, known as ground stations. Two kinds of satellites are commonly used in communications: geostationary satellites and low-earth-orbiting satellites.

Afterword

This is the first article in the computer networking series and covers the basic prerequisite knowledge; more networking content will follow.

If you found this article useful, please like it, leave a comment and share it; that is the best way to "freeload".

In addition, I have exported this content as six PDFs; the complete set is available below.

Link: pan.baidu.com/s/1mYAeS9hI… Password: p9rs