Preface

Having been in the trenches for several years, I remember that when I first started I only knew simple commands, and I wrote scripts as plainly as possible, so they sometimes ended up long and ugly.

Had I known some of the more advanced commands back then, such as xargs, pipes, and auto-answer commands, I could have written far more concise and efficient scripts.

So, for the record, I'd like to walk through some of the more advanced commands Linux offers, so I can come back to them whenever I forget.

1. The xargs command

In everyday use, I find the xargs command important and convenient. It lets us pass one command's output as arguments to another command.

For example, suppose we want to find the files ending in .conf under a certain path and classify them. The common practice is to find the .conf files and export the list to a file, then cat that file and run the file classification command on each entry.

That approach is a bit cumbersome, and this is where xargs comes in handy. Example 1: find files ending in .conf under / and classify them with the file command:

# find / -name "*.conf" -type f -print | xargs file

The output looks as follows:

xargs can feed more than just the file command; you can chain many other commands after it, such as tar, which is even more practical. You can use find to locate particular files under a given path and pipe them to tar to package them directly, as follows:

# find / -name "*.conf" -type f -print | xargs tar czf test.tar.gz
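One caveat: filenames containing spaces break the plain pipelines above, because xargs splits its input on whitespace by default. A minimal sketch of the null-delimited variant, using a throwaway directory and wc -c as a stand-in for the classification command:

```shell
# Create a scratch directory containing a filename with a space in it
mkdir -p /tmp/xargs_demo
touch "/tmp/xargs_demo/my app.conf" /tmp/xargs_demo/plain.conf

# -print0 / -0 delimit names with NUL bytes, so the space survives intact
# (wc -c stands in here for the file command used above)
find /tmp/xargs_demo -name "*.conf" -type f -print0 | xargs -0 wc -c
```

With the plain `-print` form, `my app.conf` would be split into two bogus arguments, `my` and `app.conf`.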

2. Run commands or scripts in the background

Sometimes we do not want the operation we just started to die when the terminal session drops, especially for database import and export operations. If a large amount of data is involved, we cannot guarantee that the network will stay up for the whole run, so putting the script or command in the background is our safety net.

For example, if we want to run a database export in the background and record the command output to a file, we can do this:

# nohup mysqldump -u root -p --all-databases > /root/all_databases.sql

After you execute the command, you are prompted for the password. Once you enter it, the command runs in the foreground. Since our goal is to run it in the background, press Ctrl+Z at this point and then type bg: the command moves to the background, and the password never appears on the command line.

After the command is running in the background, a nohup.out file is left in the directory where it was launched. You can check this file to see whether the command reported any errors.
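If you prefer to start the job detached from the outset, rather than using Ctrl+Z and bg after the fact, nohup plus & does it in one step. A minimal sketch, with a trivial echo standing in for the real export command and a hypothetical log path:

```shell
# Start the job immune to hangups; capture stdout and stderr in an explicit log
# (sh -c 'echo ...' stands in for the real mysqldump invocation)
nohup sh -c 'echo export finished' > /tmp/nohup_demo.log 2>&1 &

wait $!                       # wait for the background job to complete
cat /tmp/nohup_demo.log       # the log now holds the command's output
```

Note that when you redirect explicitly like this, the output goes to your chosen file and no nohup.out is created.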

3. Find the processes that use the most memory on the current system

In many O&M scenarios we find that memory consumption is running high, so how do we rank processes by memory usage? Command:

# ps aux | sort -k4nr | head -20

The fourth column of the output is the percentage of memory consumed. The last column is the corresponding process.

4. Find the processes that use the most CPU on the current system

Similarly, when CPU consumption is very high, how can we rank processes by CPU usage? Command:

# ps aux | sort -k3nr | head -20

The third column of the output is the percentage of CPU consumed, and the last column is the corresponding process.

As you can see, -k3 and -k4 select column 3 and column 4 as the sort key.
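To see how sort -k picks a column, here is a small self-contained demonstration on fabricated ps-like rows (the numbers and process names are made up):

```shell
# Fabricated "user  %cpu  %mem  command" rows standing in for ps output
printf '%s\n' \
  'root  1.0  5.2  nginx' \
  'app   9.7  2.1  java' \
  'mysql 3.4  8.9  mysqld' |
  sort -k3nr          # -k3 = third column, n = numeric, r = descending
# Output (sorted by column 3, i.e. %mem, descending):
# mysql 3.4  8.9  mysqld
# root  1.0  5.2  nginx
# app   9.7  2.1  java
```

Changing -k3 to -k2 would rank the same rows by the %cpu column instead.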

5. View multiple log or data files at the same time

In our daily work, we might view log files by running tail in a separate terminal for each log file. I do that too, but sometimes it gets cumbersome. There is actually a tool called multitail that lets you view multiple log files simultaneously in the same terminal.

Install multitail first:

# wget ftp://ftp.is.co.za/mirror/ftp.rpmforge.net/redhat/el6/en/x86_64/dag/RPMS/multitail-5.2.9-1.el6.rf.x86_64.rpm

# yum -y localinstall multitail-5.2.9-1.el6.rf.x86_64.rpm

The multitail tool supports text highlighting, content filtering, and other features you might need.

Here is a useful example: we want to watch the secure log, filtered by a keyword, and a real-time ping at the same time:

# multitail -e "Accepted" /var/log/secure -l "ping baidu.com"

Convenient, isn't it? If you want to check the correlation between two logs, say whether output in one triggers output in the other, switching back and forth between two terminals wastes time, and multitail is a good way to watch both at once.
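If multitail is not installed, plain tail can also read several files at once, prefixing each file's chunk with a `==> file <==` header. A quick sketch on throwaway files (with -f instead of -n it would follow both live):

```shell
# Two scratch files standing in for real logs
echo 'Accepted publickey for root' > /tmp/demo_secure.log
echo '64 bytes from baidu.com'     > /tmp/demo_ping.log

# tail accepts several files; each chunk is labeled with its filename
tail -n 1 /tmp/demo_secure.log /tmp/demo_ping.log
```

This lacks multitail's highlighting and filtering, but works everywhere coreutils does.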

6. Ping continuously and record the results in a log

Operations staff often hear the same refrain: "Is something wrong with the network? The service is acting strangely; it must be the server network." This is classic blame-shifting: when a business problem appears and the people involved cannot find the cause quickly, many cases get attributed to server network problems.

If you just fire off a few pings and show the results, people will push back: the problem was only during that window, the business is back to normal now, so of course the network looks fine. At that point you are probably fuming.

Pulling out monitoring data from Zabbix or the like does not help here either; Zabbix's collection interval cannot reasonably be set to 1 second. I ran into exactly this problem, and ended up collecting ping results with the following command.

Later, when someone tried to pin the blame on me, I pulled out the ping log for the period in question and we talked it over openly, with the ping records as evidence. From then on, they did not dare to shift the blame so casually.

Command:

# ping api.jpush.cn | awk '{ print $0"    "strftime("%Y-%m-%d %H:%M:%S",systime()) }' >> /tmp/jiguang.log &

The output is recorded in /tmp/jiguang.log, with one ping record appended every second, as follows:
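Two caveats about the one-liner above: strftime is a gawk extension, and awk may buffer its output when writing through a pipe, delaying the log. A portable shell-loop equivalent (the input here is a stand-in for real ping output) looks like this:

```shell
# Timestamp each line as it arrives; date runs once per line
printf '%s\n' 'reply one' 'reply two' |   # stand-in for ping output
while IFS= read -r line; do
  echo "$line $(date '+%Y-%m-%d %H:%M:%S')"
done
```

In real use you would replace the printf with `ping api.jpush.cn` and append the loop's output to the log file with `>> /tmp/jiguang.log &`.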

7. Check the TCP connection status

You can view the TCP connection states of port 80 to check whether connections are being released properly, or to analyze the state distribution when an attack occurs.

Command:

# netstat -nat | awk '{print $6}' | sort | uniq -c | sort -rn

8. Find the top 20 IP addresses with the most requests to port 80

Sometimes the number of business requests suddenly spikes. We can check the source IPs of the requests: if they are concentrated on a few addresses, it may be an attack, and we can block them with the firewall. The command is as follows:

# netstat -anlp | grep 80 | grep tcp | awk '{print $5}' | awk -F: '{print $1}' | sort | uniq -c | sort -nr | head -n20
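The heart of that pipeline is splitting netstat's Address:port column on the colon, then counting occurrences per IP. A self-contained run on fabricated foreign-address values (the IPs are made up):

```shell
# Fabricated "foreign address" values standing in for netstat column 5
printf '%s\n' \
  '10.0.0.5:52100' \
  '10.0.0.5:52101' \
  '10.0.0.9:40022' |
  awk -F: '{print $1}' |     # keep only the IP part before the colon
  sort | uniq -c | sort -nr  # count per IP, busiest first
```

The busiest source (10.0.0.5, with 2 connections) ends up on the first line, which is why head -n20 at the end of the real pipeline yields the top talkers.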

9. SSH port forwarding

Many of you know SSH as the secure remote login protocol for Linux and use it to manage servers remotely. Fewer people know that SSH can also do port forwarding. In fact, SSH's port forwarding is very powerful; here is a demonstration.

Example background: our company has a bastion host, and all operations must go through it. Some developers need to access Elasticsearch's head plugin to see the cluster status, but we do not want to map Elasticsearch's port 9200 directly onto the bastion host. So we forward requests to port 9200 on the bastion host (192.168.1.15) to the Elasticsearch server (192.168.1.19) on port 9200.

Example (run on the bastion host):

# ssh -p 22 -C -f -N -g -L 9200:192.168.1.19:9200 [email protected]

Remember: key-based login to the target must be set up first.

After the command is executed, requests to 192.168.1.15:9200 are actually served by 192.168.1.19:9200.

Afterword

That's all for this installment; I will continue to organize and record more when I next have time.

This article is reprinted from efficient Operations Community
