Man Jiang Hong (Full River Red), by Yue Fei

My wrath bristles through my helmet as I lean on the railing, where the pattering rain has just stopped. Raising my eyes, I look up and give a long roar to the sky, my heart surging with fierce resolve. Thirty years of fame and deeds are but dust and soil; eight thousand li of road, under cloud and moon. Do not wait idly, for a young man's head turns white and grief comes empty. The shame of Jingkang is still not avenged; when will this servant's hatred be extinguished? Let us drive our long chariots and trample the gap at Helan Mountain. In fierce ambition we would feast on the invaders' flesh; laughing and chatting, drink the Xiongnu's blood. Then, starting anew, we will recover our old rivers and mountains and report to the palace of Heaven.

Background

Recently, our project needed to deploy ELK with Docker. Because our logs are generated on the client side, and with security in mind, we provide two interfaces on the back end: one for pushing log information in real time, and one for uploading log files. Both serve as Logstash inputs.

I originally thought it would be simple, but I am a front-end developer filling in on this task, unfamiliar with Docker, Java, Linux and ELK, so it took me quite a while. I am writing it down here; if anything in the article is wrong, please point it out in the comments.

The business process

The client generates logs and pushes them to the back-end server in two ways:

  • When online, logs are pushed directly through the interface
  • When offline, logs are persisted locally and uploaded as files the next time the client connects to the Internet

The server provides the following two interfaces:

  • Real-time log push interface: 1. identity authentication; 2. data processing; 3. push to the message queue
  • Log file upload interface: 1. identity authentication; 2. save the file (a minimal sketch of both endpoints follows this list)
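To make this concrete, here is a minimal sketch of what the two interfaces could look like. This is not the project's actual code: Spring Boot is assumed, and the endpoint paths, token header, and save directory (/app/logs) are invented; only the exchange name and routing key are taken from the Logstash config shown later.

// Minimal sketch, not the real project code. Assumes Spring Boot with spring-boot-starter-amqp.
import java.io.File;
import java.io.IOException;
import org.springframework.amqp.rabbit.core.RabbitTemplate;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.*;
import org.springframework.web.multipart.MultipartFile;

@RestController
@RequestMapping("/logs")
public class LogController {

    private final RabbitTemplate rabbitTemplate;

    public LogController(RabbitTemplate rabbitTemplate) {
        this.rabbitTemplate = rabbitTemplate;
    }

    // Real-time push: 1. authenticate; 2. process; 3. push to the message queue.
    @PostMapping("/push")
    public ResponseEntity<Void> push(@RequestHeader("X-Auth-Token") String token,
                                     @RequestBody String logJson) {
        if (!isValidToken(token)) {
            return ResponseEntity.status(401).build();   // 1. identity authentication
        }
        // 2. data processing (validation, enrichment) would happen here
        // 3. exchange and routing key match the rabbitmq input in the pipeline config below
        rabbitTemplate.convertAndSend("scheduler-logs-exchange", "log.handler", logJson);
        return ResponseEntity.ok().build();
    }

    // File upload: 1. authenticate; 2. save the file where Logstash can read it.
    @PostMapping("/upload")
    public ResponseEntity<Void> upload(@RequestHeader("X-Auth-Token") String token,
                                       @RequestParam("file") MultipartFile file) throws IOException {
        if (!isValidToken(token)) {
            return ResponseEntity.status(401).build();
        }
        // Assumed directory, shared with the Logstash container via a volume
        file.transferTo(new File("/app/logs", file.getOriginalFilename()));
        return ResponseEntity.ok().build();
    }

    private boolean isValidToken(String token) {
        return token != null && !token.isEmpty();        // placeholder for real authentication
    }
}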

ELK is responsible for data collection, with two data sources:

  • Data in RabbitMQ queues
  • Uploaded log files

The above is the general process

ELK installation

The installation mainly followed this article on building an ELK environment with Docker; just install according to it, and it is very convenient.

Configuration problems

This is where most of my time went, especially on the Logstash input configuration.

Here are the configuration files in the project

docker-compose.yml is the configuration file for the three ELK services (Elasticsearch, Logstash, Kibana). Later, because my back end is also deployed as a container, the uploaded log files need to be accessible to Logstash, so I added a volume to the logstash service:

- type: bind
  source: ./logstash/logs
  target: /usr/share/logstash/logs   ### this is the path inside the logstash container
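For context, here is roughly how the shared directory fits together once the back-end container is in the picture. This is a hedged sketch: the backend service name and its mount path /app/logs are my assumptions; only the logstash mapping comes from the project.

services:
  backend:
    volumes:
      - ./logstash/logs:/app/logs            ### the Java service saves uploaded files here (assumed path)
  logstash:
    volumes:
      - type: bind
        source: ./logstash/logs
        target: /usr/share/logstash/logs     ### same host directory, read by the file input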

Then there is the Logstash pipeline configuration:

### file input; path is the absolute path inside the Logstash container, which is mapped to the host
input {
  file {
    mode => "read"
    start_position => "beginning"
    path => ["/usr/share/logstash/logs/logs/**"]
    codec => json
  }
}

### the rabbitmq input
input {
  rabbitmq {
    host => "172.20.10.2"
    port => 5672
    user => "admin"
    password => "admin"
    queue => "log.handler"
    exchange => "scheduler-logs-exchange"
    exchange_type => "direct"
    key => "log.handler"
    heartbeat => 30
    durable => true
    codec => json
  }
}

## Add your filters / logstash plugins configuration here

output {
  elasticsearch {
    hosts => ["elasticsearch:9200"]
    user => "your name"
    password => "your password"
    index => "your index"
  }
}
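Incidentally, a habit that would have saved me some restarts: Logstash can check a config file for syntax errors without fully starting, using its --config.test_and_exit flag. Something like the following, assuming a container name and config path like those above (the extra --path.data avoids clashing with the running instance's data directory):

docker-compose exec logstash bin/logstash --config.test_and_exit -f /usr/share/logstash/pipeline/logstash.conf --path.data /tmp/config-test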

Let me name a few of the problems:

  1. The producer for RabbitMQ is our back-end Java code. Originally, the Java code declared the exchange and the queue itself before pushing messages to RabbitMQ, which made the Logstash input fail: the Logstash rabbitmq input declares the queue and exchange on startup, and RabbitMQ rejects a redeclaration whose properties do not match. Leave the declaration to Logstash and it will create them automatically. A similar problem occurred when I changed the RabbitMQ exchange_type.
  2. Originally I configured two files for the pipeline, one for RabbitMQ and one for files. That configuration also works, but every piece of data pushed to RabbitMQ produced two outputs. The reason is that each of the two files contains an output, and by default Logstash concatenates all config files into a single pipeline, so every event that arrives through any input flows through every output. So I merged the configuration back into one file (an alternative using separate pipelines is sketched after this list). There is also a strange issue with the order of the rabbitmq and file inputs: in my tests the file input has to come before the rabbitmq input, or the file input stops working, for reasons I have not found.
  3. A deployment problem: everything was fine in my local environment, but on the server, reading the file always failed, with an error saying a file in an input field was illegal. Comparing with the local setup, I noticed that when a log file is read successfully, the Logstash container deletes it afterwards (with mode => "read", the file input's file_completed_action defaults to delete), but on the server the log files were still there. It turned out the Logstash container did not have sufficient permissions on the files, including no delete permission, so I had to go inside the container and change the permissions.
  4. After modifying the configuration file, remember to restart the service, restart the service, restart the service…
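An aside on problem 2: merging everything into one file works, but Logstash can also run the two configs as genuinely separate pipelines via config/pipelines.yml, so that an event entering one pipeline never reaches the other pipeline's output. A sketch, assuming the two files are kept at the paths below (I have not tested this in this project):

- pipeline.id: rabbitmq-logs
  path.config: "/usr/share/logstash/pipeline/rabbitmq.conf"
- pipeline.id: file-logs
  path.config: "/usr/share/logstash/pipeline/file.conf"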

In the end, although everything now seems to be working correctly, I have a feeling that there are still a few problems with my approach:

  • Having to go into the Logstash container to change file permissions by hand
  • The coupling between the directory where the Java service saves uploaded files and the directory Logstash reads logs from

I hope you can offer some pointers. Thank you!