RabbitMQ is an open source message broker, created in 2007 to implement AMQP; over the past twelve years it has grown to support HTTP, STOMP, SMTP and other protocols through an ever-growing list of plug-ins. It is also a strong competitor to Kafka. In today’s article we’ll detail how to use the Elastic Stack to monitor RabbitMQ.

 

Introduction to RabbitMQ

RabbitMQ is message queue software, also known as a message broker or queue manager. Put simply, it is software that defines queues to which applications connect in order to transfer one or more messages.

A message can contain any kind of information. For example, it might carry information about a process or task that should be started on another application (perhaps even on another server), or it might just be a simple text message. The queue manager stores messages until a receiving application connects and fetches them from the queue; the receiving application then processes the message.

The basic architecture of a message queue is simple: client applications called producers create messages and deliver them to the broker (the message queue). Other applications, called consumers, connect to the queue and subscribe to the messages to be processed. A piece of software can act as a producer of messages, as a consumer, or as both. Messages placed on a queue are stored until a consumer retrieves them.

Here we’ll show how to import RabbitMQ logs into the Elastic Stack and analyze them.

 

Which RabbitMQ logs?

It is important to know that since release 3.7.0 (29 November 2017), RabbitMQ logs to a single log file. Before that, there were two log files. In this article I’m using RabbitMQ version 3.8.2, so the contents of one log file will be sent to Elasticsearch. If you are using an earlier version of RabbitMQ, especially one prior to 3.7.0, please refer to the documentation for more information on the two different log files.

Also, for current versions of RabbitMQ, you can specify in rabbitmq.conf where RabbitMQ stores its log files, which I will show during the installation below.

 

Configuration

The way we send RabbitMQ logs to the Elastic Stack is by using Filebeat and Logstash. Here is my configuration:

 

As shown above, Elasticsearch and Kibana (the Elastic Stack) run on macOS, while the rest runs on Ubuntu.

Installation

To complete our setup, we perform the following installations:

Install Elasticsearch

If you don’t already have Elasticsearch installed, follow my tutorial “How to Install Elasticsearch on Linux, MacOS, and Windows” to install it. Since our Elastic Stack needs to be accessed from another Ubuntu VM, we need to adjust the Elasticsearch configuration. Start by opening the elasticsearch.yml configuration file in the config directory with an editor. We need to change the network.host setting to bind to our machine’s IP address. On Mac and Linux machines, we can run:

$ ifconfig

to see the IP address of our machine. In my case, the IP address of my macOS machine is 192.168.43.220.

Above, we set network.host to "_site_" so that Elasticsearch binds to the site-local IP address of our machine. See the Elasticsearch network.host documentation for details.

We must also add discovery.type: single-node at the end of elasticsearch.yml to indicate that this is a single-node cluster.
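
Putting the two settings together, the relevant part of elasticsearch.yml looks roughly like this (a minimal sketch; everything else keeps its defaults):

network.host: "_site_"        # bind to the machine's site-local IP address
discovery.type: single-node   # run as a single-node cluster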

After changing this, let’s save elasticsearch.yml and restart Elasticsearch. We can then open the IP address we just configured, with port 9200, in a browser to check that Elasticsearch is working properly.
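
Alternatively, a quick check from the command line (using the address from this article; substitute your own):

$ curl http://192.168.43.220:9200

If Elasticsearch is running, this returns a short JSON document containing the node name, cluster name, and version.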

Install Kibana

We can install Kibana as described in “How to Install Kibana in Elastic Stack on Linux, MacOS, and Windows”. Since the address of our Elasticsearch has changed, we have to modify the Kibana configuration file as well. We use our favorite editor to open the kibana.yml file in the config directory and find server.host. Change its value to the IP address of your computer. In my case:

Then find elasticsearch.hosts and enter your own IP address:
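
With the address used in this article, the two settings in kibana.yml would look roughly like this (a sketch; use your own IP address):

server.host: "192.168.43.220"
elasticsearch.hosts: ["http://192.168.43.220:9200"]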

Save the kibana.yml file and run Kibana. Enter your IP address with port 5601 in the browser’s address bar:

If the configuration is successful, we can see the above screen.

 

Install RabbitMQ

See “How to Install Latest RabbitMQ Server on Ubuntu 18.04 LTS” for installing RabbitMQ on Ubuntu; I won’t repeat it here. After installation, the RabbitMQ service is started and enabled to start on boot. To check its status, run:

sudo systemctl status  rabbitmq-server.service 
$ sudo systemctl status rabbitmq-server.service
rabbitmq-server.service - RabbitMQ broker
   Loaded: loaded (/lib/systemd/system/rabbitmq-server.service; enabled; vendor preset: enabled)
   Active: active (running) since Thu 2020-02-27 16:02:00 CST; 1h 30min ago
 Main PID: 844 (beam.smp)
   Status: "Initialized"
    Tasks: 127 (limit: 4915)
   CGroup: /system.slice/rabbitmq-server.service
           ├─ 844 /usr/lib/erlang/erts-10.6.4/bin/beam.smp -W w -A 96 -MBas ageffcbf -MHas ageffcbf ...
           ├─1011 /usr/lib/erlang/erts-10.6.4/bin/epmd -daemon
           ├─1393 erl_child_setup 32768
           ├─1636 inet_gethost 4
           └─1637 inet_gethost 4

Feb 27 16:01:59 liuxg rabbitmq-server[844]:   Doc guides: https://rabbitmq.com/documentation.html
Feb 27 16:01:59 liuxg rabbitmq-server[844]:   Support: https://rabbitmq.com/contact.html
Feb 27 16:01:59 liuxg rabbitmq-server[844]:   Tutorials: https://rabbitmq.com/getstarted.html
Feb 27 16:01:59 liuxg rabbitmq-server[844]:   Monitoring: ...

If we see the output above, our rabbitmq-server has been installed successfully. By default, it writes its log files to the directory /var/log/rabbitmq. To configure the log level and log file name, make sure the following values are set in /etc/rabbitmq/rabbitmq.conf:

/etc/rabbitmq/rabbitmq.conf
log.dir = /var/log/rabbitmq
log.file = rabbit.log
log.file.level = info 

After this change, our log file will be named rabbit.log and the log level will be set to info. Since we have modified the configuration file, we need to restart rabbitmq-server for the change to take effect. We execute the following command:

sudo service rabbitmq-server restart 

Install RabbitMQ Demo

To exercise the RabbitMQ setup above, I will demonstrate with the RabbitMQ JMS client in a Spring Boot app. This can be done using the instructions in the project’s README, with either the Spring CLI or the Java JAR file. To compile the application, you must have Java 8 installed. We follow these steps:

git clone https://github.com/rabbitmq/rabbitmq-jms-client-spring-boot-trader-demo

After downloading the above test application, we go to the root directory of the application and type the following commands:

sudo rabbitmq-plugins enable rabbitmq_jms_topic_exchange
mvn clean package
java -jar target/rabbit-jms-boot-demo-${VERSION}-SNAPSHOT.jar
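
The VERSION placeholder above depends on your build; a quick way (a simple check) to see the exact jar name that was produced is:

$ ls target/*.jar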

As noted, the VERSION part above must be filled in according to your own build output. If the application compiles and runs successfully, we can see output like this:

After the configuration above, we can find the RabbitMQ log at the following location:

$ pwd
/var/log/rabbitmq
liuxg@liuxg:/var/log/rabbitmq$ ls
log  [email protected]  rabbit@liuxg_upgrade.log  rabbit.log

Where rabbit.log is the file name we just configured.
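
As an optional sanity check, we can watch this log file grow while the demo application runs:

$ tail -f /var/log/rabbitmq/rabbit.log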

Next we’ll see how to import these logs into Elasticsearch using Filebeat and Logstash.

 

Install Logstash

We set up Logstash by following “How to Install Logstash in the Elastic Stack”. For a Linux installation, we can do the following:

curl -L -O https://artifacts.elastic.co/downloads/logstash/logstash-7.5.0.tar.gz
tar -xzvf logstash-7.5.0.tar.gz
cd logstash-7.5.0

We use a basic grok pattern to separate the timestamp, log level, and message from the original message, and then send the output to Elasticsearch, specifying an index. We can create a configuration file for Logstash:

logstash_rabbitmq.conf

input {
  beats {
    port => 5044
  }
}

filter {
  grok {
    match => {
      "message" => ["%{TIMESTAMP_ISO8601:timestamp} \[%{LOGLEVEL:log_level}\] \<%{DATA:field_misc}\> %{GREEDYDATA:message}"]
    }
  }
}

output {
  stdout {
    codec => rubydebug
  }
  elasticsearch {
    hosts => "192.168.43.220:9200"
    index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
  }
}

Above, the input comes from Beats on port 5044. You will need to change the Elasticsearch IP address to your own. We also added a stdout output for easier debugging: whenever a message is processed, it is also printed to the terminal.
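
For illustration, here is roughly how that grok pattern splits a RabbitMQ log line (the sample line below is made up to show the fields):

2020-02-27 16:02:00.000 [info] <0.277.0> Server startup complete; 3 plugins started.

timestamp  => "2020-02-27 16:02:00.000"
log_level  => "info"
field_misc => "0.277.0"
message    => "Server startup complete; 3 plugins started."

Note that grok does not overwrite existing fields by default, so the parsed text is added alongside the original message field; adding overwrite => ["message"] to the grok filter is a common way to replace it instead.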

We can run our logstash as follows:

$ pwd
/home/liuxg/logstash/logstash-7.5.0
liuxg@liuxg:~/logstash/logstash-7.5.0$ ./bin/logstash -f ~/rabbitmq/logstash_rabbitmq.conf
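
As a side note, the pipeline configuration can also be syntax-checked without starting the pipeline:

$ ./bin/logstash -f ~/rabbitmq/logstash_rabbitmq.conf --config.test_and_exit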

If it works, we can see in the terminal that Logstash has started properly:

Next, we need to install Filebeat to send our RabbitMQ logs to port 5044 of Logstash.

 

Filebeat installation

In our Ubuntu terminal, we type the following commands to install Filebeat:

curl -L -O https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-7.5.0-amd64.deb
sudo dpkg -i filebeat-7.5.0-amd64.deb

We modify the Filebeat configuration file /etc/filebeat/filebeat.yml:

filebeat.yml

filebeat.inputs:

- type: log
  fields:
    log_type: rabbitmq-server
  paths:
    - /var/log/rabbitmq/*log
  fields_under_root: true
  encoding: utf-8
  ignore_older: 3h

filebeat.registry.path: /var/lib/filebeat/registry

output:
  logstash:
    hosts: ["localhost:5044"]
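
Before starting the service, we can optionally verify the configuration and the connection to the Logstash output:

$ sudo filebeat test config
$ sudo filebeat test output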

Let’s start the Filebeat service:

sudo service filebeat start

This starts the Filebeat service. To check whether Filebeat started successfully, run the following command:

sudo systemctl status filebeat
$ sudo systemctl status filebeat
● filebeat.service - Filebeat sends log files to Logstash or directly to Elasticsearch.
   Loaded: loaded (/lib/systemd/system/filebeat.service; disabled; vendor preset: enabled)
   Active: active (running) since Thu 2020-02-27 19:59:03 CST; 34s ago
     Docs: https://www.elastic.co/products/beats/filebeat
 Main PID: 21028 (filebeat)
    Tasks: 15 (limit: 4915)
   CGroup: /system.slice/filebeat.service
           └─21028 /usr/share/filebeat/bin/filebeat -e -c /etc/filebeat/filebeat.yml -path.home /usr...

Feb 27 19:59:04 liuxg filebeat[21028]: 2020-02-27T19:59:04.132+0800 INFO registrar/reg...
Feb 27 19:59:04 liuxg filebeat[21028]: 2020-02-27T19:59:04.132+0800 WARN beater/filebe...
Feb 27 19:59:04 liuxg filebeat[21028]: 2020-02-27T19:59:04.132+0800 INFO crawler/crawl...
Feb 27 19:59:04 liuxg filebeat[21028]: 2020-02-27T19:59:04.133+0800 INFO log/input.go:...
Feb 27 19:59:04 liuxg filebeat[21028]: 2020-02-27T19:59:04.133+0800 INFO input/input.g...
Feb 27 19:59:04 liuxg filebeat[21028]: 2020-02-27T19:59:04.134+0800 INFO log/harvester...
Feb 27 19:59:04 liuxg filebeat[21028]: 2020-02-27T19:59:04.153+0800 INFO pipeline/outp...
Feb 27 19:59:04 liuxg filebeat[21028]: 2020-02-27T19:59:04.158+0800 INFO pipeline/outp...
Feb 27 19:59:34 liuxg filebeat[21028]: 2020-02-27T19:59:34.133+0800 INFO [monitoring]...

This shows that Filebeat has been launched successfully. At this point, in our Logstash window, we can see output like the following:

This shows us that Logstash is already processing the events sent from Filebeat.

 

Install Metricbeat

In this section, we will show how to monitor RabbitMQ metrics.

First open our Kibana:

Select “Add Metric Data”:

Next, we select “RabbitMQ Metrics”:

Follow the instructions shown there to install and configure Metricbeat. In the /etc/metricbeat/metricbeat.yml configuration file, the part we must pay attention to is:

output.elasticsearch:
  hosts: ["<es_url>"]
  username: "elastic"
  password: "<password>"
setup.kibana:
  host: "<kibana_url>"

Fill in our own Elasticsearch address; in my case, it is 192.168.43.220:
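
For reference, with the address used in this article the filled-in settings would look roughly like this (a sketch; the username and password lines are only needed if security is enabled on your cluster):

output.elasticsearch:
  hosts: ["192.168.43.220:9200"]
setup.kibana:
  host: "192.168.43.220:5601"

The remaining steps from the Kibana instructions boil down to enabling the rabbitmq module, loading the dashboards, and starting the service. Note that the Metricbeat rabbitmq module collects metrics through the RabbitMQ management HTTP API, so the rabbitmq_management plugin must be enabled on the broker:

sudo rabbitmq-plugins enable rabbitmq_management
sudo metricbeat modules enable rabbitmq
sudo metricbeat setup
sudo service metricbeat start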

Once we have Metricbeat installed, we click “Check Data”:

Once we see data coming in, our Metricbeat is working.

 

View log documents in Kibana

Select Discover in our Kibana:

From the above we can see that we already have RabbitMQ logs.

We can also select the metricbeat-* index pattern, and then we can see:

You can see the RabbitMQ metrics above.

References

[1] computingforgeeks.com/how-to-inst…

[2] www.cloudamqp.com/blog/2015-0…