
First, what is TC?

Traffic Control (TC) is the traffic control tool in Linux. Combined with the netem queueing discipline, it can simulate a variety of network conditions. The tool takes effect directly on physical NICs; applied to a purely logical NIC it has no effect. In a virtual machine, the virtual NIC visible to the guest behaves as a physical NIC for this purpose.
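
A quick way to check which queueing discipline is currently attached to an interface (the interface name eth0 is just an example here):

tc qdisc show dev eth0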

Second, what is HTB?

HTB is short for Hierarchical Token Bucket. It implements a rich hierarchy of classes for sharing a link. HTB makes it easy to guarantee a bandwidth for each class, while still allowing a class to exceed its guarantee and borrow bandwidth that other classes are not using. HTB uses the Token Bucket Filter (TBF) mechanism to limit bandwidth and can assign priorities to classes.
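
A minimal sketch of the guarantee-plus-borrowing idea (the interface name and rates are illustrative, not taken from the tests below):

# parent class caps the link at 100 mbit
tc qdisc add dev eth0 root handle 1: htb default 20
tc class add dev eth0 parent 1: classid 1:1 htb rate 100mbit ceil 100mbit
# each child is guaranteed its rate, but may borrow idle bandwidth up to the 100 mbit ceiling
tc class add dev eth0 parent 1:1 classid 1:10 htb rate 60mbit ceil 100mbit
tc class add dev eth0 parent 1:1 classid 1:20 htb rate 40mbit ceil 100mbit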

Third, TC usage steps

To configure traffic control on a NIC, perform the following steps (a command-level sketch follows the list):

  1. Configure a queue (qdisc) on the network card
  2. Create a class on the queue
  3. Create sub-queues and sub-classes as required
  4. Create filters for each class
  5. Set up routing-table entries that work with the filters
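
A command-level sketch of these five steps (the interface, rates, port, and address are placeholders, not values used in the tests below):

# 1. attach a root queue (qdisc) to the NIC
tc qdisc add dev eth0 root handle 1: htb default 10
# 2. create a class on that queue
tc class add dev eth0 parent 1: classid 1:1 htb rate 100mbit
# 3. create a sub-class under it
tc class add dev eth0 parent 1:1 classid 1:10 htb rate 20mbit
# 4. add a filter that steers traffic into the class (here a u32 match on destination port 5001)
tc filter add dev eth0 parent 1: protocol ip prio 1 u32 match ip dport 5001 0xffff flowid 1:10
# 5. when route-based filters are used instead, add a routing entry carrying a realm
ip route add 192.0.2.10 dev eth0 realm 2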

Fourth, basic operations

1. Normal network

Use iperf to generate traffic:

Server side:

iperf -s

Client side:

iperf -c 172.17.211.143 -p 5001 -i 2 -P 5
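# -c: connect to this server, -p: port 5001, -i 2: report every 2 seconds, -P 5: five parallel streams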

Server result:

[ 12]  0.0-10.1 sec  95.2 MBytes  79.3 Mbits/sec
[ 10]  0.0-10.1 sec   158 MBytes   131 Mbits/sec
[  6]  0.0-10.1 sec   116 MBytes  95.7 Mbits/sec
[  9]  0.0-10.2 sec   143 MBytes   118 Mbits/sec
[  4]  0.0-10.3 sec   183 MBytes   150 Mbits/sec
[  7]  0.0-10.3 sec   117 MBytes  96.0 Mbits/sec
[ 11]  0.0-10.3 sec   156 MBytes   127 Mbits/sec
[ 14]  0.0-10.3 sec   138 MBytes   113 Mbits/sec
[  8]  0.0-10.3 sec   136 MBytes   111 Mbits/sec
[  5]  0.0-10.3 sec   162 MBytes   132 Mbits/sec
[SUM]  0.0-10.3 sec  1.37 GBytes  1.14 Gbits/sec

Client side:

[ ID] Interval       Transfer     Bandwidth
[  6]  0.0- 2.0 sec  88.9 MBytes   373 Mbits/sec
[  4]  0.0- 2.0 sec  85.8 MBytes   360 Mbits/sec
[  5]  0.0- 2.0 sec  68.2 MBytes   286 Mbits/sec
[  7]  0.0- 2.0 sec  52.2 MBytes   219 Mbits/sec
[  3]  0.0- 2.0 sec  92.5 MBytes   388 Mbits/sec
[SUM]  0.0- 2.0 sec   388 MBytes  1.63 Gbits/sec
[  4]  2.0- 4.0 sec  62.4 MBytes   262 Mbits/sec
[  6]  2.0- 4.0 sec  48.9 MBytes   205 Mbits/sec
[  5]  2.0- 4.0 sec  27.4 MBytes   115 Mbits/sec
[  3]  2.0- 4.0 sec  68.9 MBytes   289 Mbits/sec
[  7]  2.0- 4.0 sec  44.6 MBytes   187 Mbits/sec
[SUM]  2.0- 4.0 sec   252 MBytes  1.06 Gbits/sec
[  3]  4.0- 6.0 sec  45.5 MBytes   191 Mbits/sec
[  5]  4.0- 6.0 sec  30.0 MBytes   126 Mbits/sec
[  4]  4.0- 6.0 sec  54.8 MBytes   230 Mbits/sec
[  6]  4.0- 6.0 sec  69.4 MBytes   291 Mbits/sec
[  7]  4.0- 6.0 sec  53.1 MBytes   223 Mbits/sec
[SUM]  4.0- 6.0 sec   253 MBytes  1.06 Gbits/sec
[  4]  6.0- 8.0 sec  40.4 MBytes   169 Mbits/sec
[  6]  6.0- 8.0 sec  25.6 MBytes   107 Mbits/sec
[  7]  6.0- 8.0 sec  76.1 MBytes   319 Mbits/sec
[  3]  6.0- 8.0 sec  59.1 MBytes   248 Mbits/sec
[  5]  6.0- 8.0 sec  38.2 MBytes   160 Mbits/sec
[SUM]  6.0- 8.0 sec   240 MBytes  1.00 Gbits/sec
[  6]  8.0-10.0 sec  37.8 MBytes   158 Mbits/sec
[  6]  0.0-10.0 sec   270 MBytes   227 Mbits/sec
[  4]  8.0-10.0 sec  39.9 MBytes   167 Mbits/sec
[  4]  0.0-10.1 sec   283 MBytes   234 Mbits/sec
[  5]  8.0-10.0 sec  40.8 MBytes   171 Mbits/sec
[  5]  0.0-10.1 sec   205 MBytes   169 Mbits/sec
[  7]  8.0-10.0 sec  48.0 MBytes   201 Mbits/sec
[  7]  0.0-10.1 sec   274 MBytes   227 Mbits/sec
[  3]  8.0-10.0 sec  84.8 MBytes   355 Mbits/sec
[SUM]  8.0-10.0 sec   251 MBytes  1.05 Gbits/sec
[  3]  0.0-10.2 sec   351 MBytes   289 Mbits/sec
[SUM]  0.0-10.2 sec  1.35 GBytes  1.14 Gbits/sec

I ran this several times with similar results; the five threads add up to about 1 Gbit/s.

2. Simulate network packet loss

Simulation command:

tc qdisc add dev eth0 root netem loss 10%
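
Assuming the standard tc/netem command set, the rule can be verified, adjusted, and removed as follows:

tc qdisc show dev eth0                          # confirm the netem rule is installed
tc qdisc change dev eth0 root netem loss 20%    # adjust the loss rate in place
tc qdisc del dev eth0 root                      # remove the rule and restore the default qdisc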

Relationship between packet loss rate and bandwidth:

3. Simulate network delay

Simulation command:

tc qdisc add dev eth0 root netem delay 100ms
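
netem can also add jitter and correlation to the delay; for example (values are illustrative):

tc qdisc change dev eth0 root netem delay 100ms 20ms 25%    # 100 ms ± 20 ms, 25% correlated with the previous packet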

Relationship between latency and bandwidth:

4. Hierarchical rate limiting with HTB queues

Class configuration. Note that in tc notation "Mbps" means megabytes per second (megabits per second would be written "mbit"), which is why rate 100Mbps shows up as 800000Kbit in the statistics further down; "default 2" sends unclassified traffic to class 1:2, and the route filters place traffic into classes 1:2, 1:3, and 1:4 according to the realm attached to the matching route:

tc qdisc add dev eth0 root handle 1: htb default 2

tc class add dev eth0 parent 1: classid 1:1 htb rate 100Mbps ceil 100Mbps
tc class add dev eth0 parent 1:1 classid 1:2 htb rate 20Mbps ceil 20Mbps
tc class add dev eth0 parent 1:1 classid 1:3 htb rate 50Mbps ceil 50Mbps
tc class add dev eth0 parent 1:1 classid 1:4 htb rate 20Mbps ceil 20Mbps

tc filter add dev eth0 parent 1:0 protocol ip prio 100 route
tc filter add dev eth0 parent 1:0 protocol ip prio 100 route to 2 flowid 1:2
tc filter add dev eth0 parent 1:0 protocol ip prio 100 route to 3 flowid 1:3
tc filter add dev eth0 parent 1:0 protocol ip prio 100 route to 4 flowid 1:4

ip route add 172.17.211.144 dev eth0 via 172.17.211.143 realm 2

[root@7dgroup ~]# tc -s class ls dev eth0
class htb 1:1 root rate 800000Kbit ceil 800000Kbit burst 1600b cburst 1600b
 Sent 1350897 bytes 6146 pkt (dropped 0, overlimits 0 requeues 0)
 rate 0bit 0pps backlog 0b 0p requeues 0
 lended: 0 borrowed: 0 giants: 0
 tokens: 234 ctokens: 234

class htb 1:2 parent 1:1 prio 0 rate 160000Kbit ceil 160000Kbit burst 1600b cburst 1600b
 Sent 1350897 bytes 6146 pkt (dropped 0, overlimits 0 requeues 0)
 rate 0bit 0pps backlog 0b 0p requeues 0
 lended: 5850 borrowed: 0 giants: 0
 tokens: 1170 ctokens: 1170

class htb 1:3 parent 1:1 prio 0 rate 400000Kbit ceil 400000Kbit burst 1600b cburst 1600b
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 rate 0bit 0pps backlog 0b 0p requeues 0
 lended: 0 borrowed: 0 giants: 0
 tokens: 500 ctokens: 500

class htb 1:4 parent 1:1 prio 0 rate 160000Kbit ceil 160000Kbit burst 1600b cburst 1600b
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 rate 0bit 0pps backlog 0b 0p requeues 0
 lended: 0 borrowed: 0 giants: 0
 tokens: 1250 ctokens: 1250
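
Assuming the standard tools, the filter hits and the realm can be checked, and the whole hierarchy removed, like this:

tc -s filter show dev eth0      # per-filter statistics
ip route show                   # confirms the realm attached to the route
tc qdisc del dev eth0 root      # removes the HTB tree together with its classes and filters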

5. Network traffic limiting effect

Test method: send packets from machine A to machine B with iperf, using 5 parallel threads.
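
The article does not show the exact command used for each cap; a plausible way to apply, for example, the "10 M" case with HTB (keeping in mind that tc reads "Mbps" as megabytes per second, i.e. roughly 80 Mbit/s) would be:

tc qdisc del dev eth0 root                                # clear any previous configuration (harmless error if none exists)
tc qdisc add dev eth0 root handle 1: htb default 1
tc class add dev eth0 parent 1: classid 1:1 htb rate 10Mbps ceil 10Mbps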

No restrictions:

[  5]  0.0-10.0 sec   168 MBytes   140 Mbits/sec
[  7]  0.0-10.1 sec  75.2 MBytes  62.4 Mbits/sec
[  8]  0.0-10.2 sec   210 MBytes   172 Mbits/sec
[  4]  0.0-10.3 sec  92.8 MBytes  75.7 Mbits/sec
[  6]  0.0-10.3 sec   158 MBytes   129 Mbits/sec
[SUM]  0.0-10.3 sec   704 MBytes   574 Mbits/sec

Limited to 10 M:

[  8]  0.0-10.2 sec  16.6 MBytes  13.7 Mbits/sec
[  4]  0.0-10.2 sec  16.4 MBytes  13.5 Mbits/sec
[  5]  0.0-10.2 sec  14.5 MBytes  11.9 Mbits/sec
[  6]  0.0-10.2 sec  25.8 MBytes  21.2 Mbits/sec
[  7]  0.0-10.2 sec  19.8 MBytes  16.2 Mbits/sec
[SUM]  0.0-10.2 sec  93.0 MBytes  76.4 Mbits/sec

Limited to 20 M:

[  5]  0.0-10.1 sec  55.6 MBytes  46.0 Mbits/sec
[  7]  0.0-10.2 sec  28.9 MBytes  23.8 Mbits/sec
[  9]  0.0-10.2 sec  26.1 MBytes  21.6 Mbits/sec
[  4]  0.0-10.2 sec  45.0 MBytes  37.1 Mbits/sec
[  6]  0.0-10.2 sec  29.5 MBytes  24.3 Mbits/sec
[SUM]  0.0-10.2 sec   185 MBytes   153 Mbits/sec

Limited to 30 M:

[  4]  0.0-10.2 sec  53.0 MBytes  43.7 Mbits/sec
[  6]  0.0-10.2 sec  62.0 MBytes  51.1 Mbits/sec
[  8]  0.0-10.2 sec  57.9 MBytes  47.7 Mbits/sec
[  5]  0.0-10.2 sec  58.5 MBytes  48.2 Mbits/sec
[  7]  0.0-10.2 sec  46.4 MBytes  38.2 Mbits/sec
[SUM]  0.0-10.2 sec   278 MBytes   229 Mbits/sec

Limited to 40 M:

[  5]  0.0-10.1 sec  76.6 MBytes  63.5 Mbits/sec
[  9]  0.0-10.1 sec  76.9 MBytes  63.6 Mbits/sec
[  6]  0.0-10.1 sec  72.4 MBytes  59.9 Mbits/sec
[  7]  0.0-10.1 sec  70.6 MBytes  58.5 Mbits/sec
[  4]  0.0-10.1 sec  72.9 MBytes  60.3 Mbits/sec
[SUM]  0.0-10.1 sec   369 MBytes   305 Mbits/sec

Limited to 50 M:

[  4]  0.0-10.1 sec  89.9 MBytes  74.5 Mbits/sec
[  5]  0.0-10.1 sec  99.6 MBytes  82.5 Mbits/sec
[  8]  0.0-10.1 sec  89.9 MBytes  74.3 Mbits/sec
[  6]  0.0-10.1 sec  91.9 MBytes  76.0 Mbits/sec
[  7]  0.0-10.2 sec  89.8 MBytes  74.1 Mbits/sec
[SUM]  0.0-10.2 sec   461 MBytes   381 Mbits/sec

Limited to 60 M:

[  4]  0.0-10.1 sec   107 MBytes  89.1 Mbits/sec
[  7]  0.0-10.1 sec   121 MBytes   101 Mbits/sec
[  9]  0.0-10.1 sec   108 MBytes  89.3 Mbits/sec
[  5]  0.0-10.1 sec   107 MBytes  89.1 Mbits/sec
[  6]  0.0-10.1 sec   107 MBytes  89.2 Mbits/sec
[SUM]  0.0-10.1 sec   550 MBytes   457 Mbits/sec

Limited to 70 M:

[  8]  0.0-10.1 sec   178 MBytes   148 Mbits/sec
[  7]  0.0-10.1 sec  94.4 MBytes  78.5 Mbits/sec
[  4]  0.0-10.1 sec  95.0 MBytes  78.9 Mbits/sec
[  6]  0.0-10.1 sec  94.6 MBytes  78.6 Mbits/sec
[  5]  0.0-10.1 sec   178 MBytes   148 Mbits/sec
[SUM]  0.0-10.1 sec   640 MBytes   531 Mbits/sec

Limited to 80 M:

[  7]  0.0-10.0 sec   167 MBytes   140 Mbits/sec
[  9]  0.0-10.1 sec   166 MBytes   137 Mbits/sec
[  4]  0.0-10.2 sec  99.8 MBytes  82.4 Mbits/sec
[  5]  0.0-10.2 sec   157 MBytes   129 Mbits/sec
[  6]  0.0-10.2 sec   110 MBytes  90.2 Mbits/sec
[SUM]  0.0-10.2 sec   700 MBytes   574 Mbits/sec

Limited to 90 M:

[  4]  0.0-10.0 sec   220 MBytes   184 Mbits/sec
[  7]  0.0-10.2 sec   124 MBytes   102 Mbits/sec
[  5]  0.0-10.2 sec   104 MBytes  85.2 Mbits/sec
[  8]  0.0-10.2 sec   117 MBytes  96.2 Mbits/sec
[  6]  0.0-10.2 sec   135 MBytes   111 Mbits/sec
[SUM]  0.0-10.2 sec   699 MBytes   573 Mbits/sec

Limited to 100 M:

[  4]  0.0-10.1 sec   140 MBytes   116 Mbits/sec
[  7]  0.0-10.1 sec   139 MBytes   116 Mbits/sec
[  6]  0.0-10.1 sec   145 MBytes   121 Mbits/sec
[  5]  0.0-10.1 sec   128 MBytes   106 Mbits/sec
[  9]  0.0-10.1 sec   146 MBytes   121 Mbits/sec
[SUM]  0.0-10.1 sec   698 MBytes   579 Mbits/sec

Fifth, summary

There are many ways to simulate network packet loss, latency, and rate limiting on Linux; you can explore them further on your own.