How it started

I am busy with a big project and have little spare time to write articles.

The test environment for this large project is deployed on a server purchased from Tencent Cloud.

The server was bought by a friend, and it is bound to his WeChat account.

One morning two days ago, my friend suddenly received a security alarm notification from Tencent Cloud.

I went to the console the next day to check the details. The password had been brute-forced!

First thing, I changed the password.

The previous password was a randomly chosen 8-character one. It was probably too simple: it took only about 1,400 attempts to crack.

One lesson I learned here is that servers with public IP addresses must have complex passwords.

Tencent Cloud allows passwords of up to 30 characters, so this time I set a full 30-character one.
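If you want a genuinely random 30-character password rather than typing one by hand, here is a quick sketch using standard Linux tools (nothing Tencent Cloud specific):

# generate a random 30-character password from a mixed character set
< /dev/urandom tr -dc 'A-Za-z0-9!@#$%^&*' | head -c 30; echo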

After changing the password, I started checking whether the server had been infected with trojans, whether the database had been held for ransom, whether any files had been created recently, whether new ports had been opened, whether new users had been added, and so on.
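For reference, those checks boil down to a handful of commands; this is just a sketch of the usual places to look, not an exhaustive audit:

# files modified in the last 3 days, skipping /proc and /sys
find / \( -path /proc -o -path /sys \) -prune -o -type f -mtime -3 -print

# listening ports and the processes behind them
netstat -nlpt

# recently added users and recent logins
tail -n 5 /etc/passwd
last -n 20

# scheduled tasks for the current user
crontab -l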

Upon examination, nothing was found (in fact, the hackers had erased the traces). So I restarted the server and let it go. But something told me there was more to it than that.

Battle of wits

On the morning of the third day, I deployed the application and found that the CPU usage was 100%!

So I checked the CPU usage.

ps aux | head -1; ps aux | grep -v PID | sort -rn -k 3 | head

The output is as follows:

USER       PID %CPU %MEM     VSZ   RSS TTY      STAT START   TIME COMMAND
root     28338  182  0.0 2437008  2732 ?        Ssl  15:05 147:05 /usr/local/bin/wordpress
root     25257 14.5  0.9  764436 36020 ?        Ssl  16:25   0:00 npm
root     23779  0.4  0.3  711348 14752 ?        Sl   14:59   0:23 /usr/bin/ophvyn
mongod    1405  0.1  1.2  989616 47732 ?        Sl   Mar30   6:28 /usr/bin/mongod -f /etc/mongod.conf
root         9  0.0  0.0       0     0 ?        R    Mar30   0:09 [rcu_sched]
root         8  0.0  0.0       0     0 ?        S    Mar30   0:00 [rcu_bh]
root       789  0.0  0.0    4388   556 ?        Ss   Mar30   0:00 /usr/sbin/acpid
root       775  0.0  0.0   26380  1740 ?        Ss   Mar30   0:00 /usr/lib/systemd/systemd-logind
root       748  0.0  0.0   55528  1100 ?        S<sl Mar30   0:05 /sbin/auditd
root         7  0.0  0.0       0     0 ?        S    Mar30   0:00 [migration/0]

A process called wordpress was using 182% of the CPU.

But I have never installed WordPress on this server, so this looked like a trojan planted by the hackers.

My initial guess was that it was a cryptocurrency-mining process.

Then I set out to deal with this mining trojan.

First, kill the process.

kill -9 28338

Then delete the wordpress file.
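Deleting it is a one-liner (a sketch, assuming the binary really is at the path ps reported):

# remove the fake "wordpress" binary shown in the ps output
rm -f /usr/local/bin/wordpress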

The CPU usage dropped immediately, but I didn’t think it was over. Anyone capable of breaking into my server wouldn’t be stopped that easily.

So I began to wait patiently to see if anything else had changed.

Sure enough, a few minutes later, the server’s CPU usage soared again to 100%.

When I looked again at the process with the highest CPU usage, I saw wordpress again. I guessed it had a daemon process or a scheduled task behind it.
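Scheduled tasks are the easier half to rule out; the usual places to check are sketched below:

# cron entries for root and the system-wide locations
crontab -l
ls /var/spool/cron/
cat /etc/crontab
ls /etc/cron.d/ /etc/cron.hourly/ /etc/cron.daily/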

I started looking for daemons.

ps -eo ppid,pid,sid,stat,tty,comm  | awk '{ if ($2 == $3 && $5 == "?" ) {print $0}; } '

The output is as follows:

    0     1     1 Ss   ?        systemd
    1   454   454 Ss   ?        systemd-journal
    1   481   481 Ss   ?        systemd-udevd
    1   484   484 Ss   ?        lvmetad
    1   748   748 S<sl ?        auditd
    1   772   772 Ssl  ?        polkitd
    1   775   775 Ss   ?        systemd-logind
    1   776   776 Ss   ?        dbus-daemon
    1   778   778 Ss   ?        ntpd
    1   783   783 Ss   ?        lsmd
    1   789   789 Ss   ?        acpid
    1  1054  1054 Ss   ?        dhclient
    1  1126  1126 Ssl  ?        tuned
    1  1135  1135 Ss   ?        vsftpd
    1  1249  1249 Ssl  ?        rsyslogd
    1  1255  1255 Ss   ?        atd
    1  1258  1258 Ss   ?        crond
    1  1555  1555 Ss   ?        sshd
 1555  1956  1956 Ss   ?        sshd
 1956  2009  2009 Ss   ?        bash
    1  2380  2380 Ss   ?        nginx
    1  2860  2860 Ssl  ?        node
    1 28338 28338 Ssl  ?        wordpress
 2860 30275 30275 Ssl  ?        node

Sure enough, there was a wordpress daemon and, oddly enough, two node processes. Only one node process should have been running on my server. Judging by the PIDs, node process 30275 was started after wordpress.

There must be some relationship between Node and wordpress.
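One way to confirm the relationship is to peek at /proc; a sketch using the PIDs from the listing above:

# parent of the suspicious node process, plus where each binary actually lives
ps -o ppid= -p 30275
ls -l /proc/30275/exe /proc/30275/cwd
ls -l /proc/28338/exe /proc/28338/cwd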

So I killed all three processes and deleted the wordpress file.

After half an hour of observation, the server seemed to be back to normal.

I had a good sleep that day.

The resurgence

This morning, when I logged into the server again, I found the CPU at 100% once more!

The hackers were clever enough.

This time I repeated yesterday’s steps and then watched for further changes on the server.

After a while, I noticed the I/O fluctuating, and then the CPU climbed back up to 100%.
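Watching the I/O in real time can be done with something like this (iotop and iostat are usually not installed by default):

# per-process I/O, showing only processes that are actually doing I/O
iotop -o

# or device-level statistics refreshed every second
iostat -x 1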

Then I checked the port usage.

netstat -nlpt

The output is as follows:

Proto Recv-Q Send-Q Local Address      Foreign Address    State    PID/Program name
tcp        0      0 0.0.0.0:43575      0.0.0.0:*          LISTEN   23779/ophvyn
tcp        0      0 0.0.0.0:27017      0.0.0.0:*          LISTEN   1405/mongod
tcp        0      0 0.0.0.0:80         0.0.0.0:*          LISTEN   2380/nginx: master
tcp        0      0 0.0.0.0:22         0.0.0.0:*          LISTEN   1555/sshd
tcp6       0      0 :::21              :::*               LISTEN   1135/vsftpd

When I saw ophvyn (PID 23779) listening on port 43575, it was like seeing the first light of day.

It appeared to be downloading trojans from a remote server.
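To see who a process like this is talking to, you can list the connections belonging to its PID; a quick sketch:

# connections owned by the suspicious process
netstat -anpt | grep 23779

# or every open socket of that process (note the -a to AND the filters)
lsof -a -p 23779 -i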

Here is another lesson: be careful when configuring security groups, and never open all the ports.

Since this is a test server, all ports were left open for debugging purposes.
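The proper fix is in the Tencent Cloud security-group console, but the host itself can also be locked down; a firewalld sketch for a CentOS-style machine that keeps only SSH and HTTP open:

# keep only the services that are actually needed
firewall-cmd --permanent --add-service=ssh
firewall-cmd --permanent --add-service=http
firewall-cmd --reload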

The next step was to kill ophvyn.

Start by looking at ophvyn’s details.

ps 23779

I found out where it was.

PID TTY      STAT   TIME COMMAND
23779 ?        Sl     0:31 /usr/bin/ophvyn

I shut down all related processes and deleted all related files.
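Concretely, that came down to something like this (a sketch; the find is there to catch any copies hiding elsewhere on disk):

# stop the process, remove the binary, then hunt for leftovers by name
kill -9 23779
rm -f /usr/bin/ophvyn
find / -name 'ophvyn*' -not -path '/proc/*' 2>/dev/null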

I waited a long time, and no more funny business appeared.

The world seemed to be quiet.

Start all over again

Everything was as calm as ever, as if nothing had happened.

On the afternoon of the third day, I decided to back up the data on the server, reinstall the system, and kiss the hackers goodbye.
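The backup itself was nothing fancy; a sketch, assuming the data that matters is the MongoDB database and the application directory (the application path and destination host below are placeholders):

# dump MongoDB, archive the application, and copy both off the server
mongodump --out /tmp/mongo-backup
tar czf /tmp/app-backup.tar.gz /path/to/app
scp -r /tmp/mongo-backup /tmp/app-backup.tar.gz user@backup-host:/backups/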

I could have backed up the data and reinstalled the system the day after I got the warning. The reason I didn’t was that I wanted to see what the hackers would do with my server. Now that I had seen what I wanted to see and had my fill of the back-and-forth, I finally chose to reinstall the server anyway.