With the growing number of projects and servers, the company decided to use the ELK (Elasticsearch + Logstash + Kibana) stack as its log analysis platform for microservice logs.

1. ELK overall plan

1.1 ELK architecture diagram

1.2 ELK workflow

1. Deploy Logstash on each microservice server as a Shipper to collect the microservice log file data and output the collected data to a Redis message queue.

2. Deploy Logstash on another server as an Indexer to read data from the Redis message queue (optionally processing it) and output it to the Elasticsearch master node.

3. The Elasticsearch master node synchronizes data with the other nodes in the cluster. (An odd number of Elasticsearch nodes is recommended.)

4. Kibana is deployed on a server to read data from the Elasticsearch cluster and provide a web page for querying and displaying it.

2. Message queue selection

2.1 Redis

In the final scheme I chose Redis as the message queue buffer, to relieve pressure on Elasticsearch and smooth out peak load. The main reasons were cost and the fact that log collection was only used by our single project team, so I reused the company's existing Redis cluster.

2.2 Kafka

In the initial solution, Kafka was chosen as a message queue. After all, Kafka is a message queue by nature.
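For comparison, here is a minimal sketch of what the Kafka variant's Logstash configuration might look like, assuming the standard kafka output/input plugins; the broker address, topic name and group id are placeholders, not values from the project:

# Shipper side: write collected events to a Kafka topic instead of Redis
output {
    kafka {
        bootstrap_servers => "kafka-host:9092"
        topic_id => "test_log"
    }
}

# Indexer side: read the same topic back for processing and output to Elasticsearch
input {
    kafka {
        bootstrap_servers => "kafka-host:9092"
        topics => ["test_log"]
        group_id => "logstash-indexer"
    }
}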

3. Installation

The installation steps are not written out here; the following addresses are provided for reference only:

Install Elasticsearch for Linux

4. Logstash configuration

4.1 log2redis

Ship data from the log file to Redis.

# Read data from a log file
# file {}
#   type            Log type
#   path            Log location:
#                     - a single file can be read directly (a.log)
#                     - wildcards are supported (*.log)
#                     - all files in a folder can be read (path)
#   start_position  Position to start reading files (beginning)
#   sincedb_path    Where the sincedb file is kept (set to /dev/null to always read from start_position)
input {
    file {
        type => "log"
        path => ["/root/logs/info.log"]
        start_position => "beginning"
        sincedb_path => "/dev/null"
    }
}

# Separate log entries by timestamp
# grok splits each log line into its fields
filter {
    multiline {
        pattern => "^%{TIMESTAMP_ISO8601} "
        negate => true
        what => "previous"
    }
    # Define the format of the data
    grok {
        match => { "message" => "%{DATA:datetime} - %{DATA:logLevel} - %{DATA:serviceName} - %{DATA:ip} - %{DATA:pid} - %{DATA:thread} - %{GREEDYDATA:msg}" }
    }
}

# Output data to Redis
# host       Redis host address
# port       Redis port
# db         Redis database number
# data_type  Redis data type
# key        Redis key
# password   Redis password
output {
    redis {
        host => "ip"
        port => 6379
        db => 6
        data_type => "list"
        password => "password"
        key => "test_log"
    }
}
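The shipper pipeline is started with its own configuration file, e.g. bin/logstash -f log2redis.conf. One caveat: the multiline filter used above has been deprecated in newer Logstash releases in favor of the multiline codec on the input. A sketch of the same merging done with the codec instead (same pattern, applied directly on the file input):

# Alternative: merge multi-line entries (e.g. stack traces) with the multiline codec
input {
    file {
        type => "log"
        path => ["/root/logs/info.log"]
        start_position => "beginning"
        sincedb_path => "/dev/null"
        codec => multiline {
            pattern => "^%{TIMESTAMP_ISO8601} "
            negate => true
            what => "previous"
        }
    }
}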

4.2 redis2es

Read data from Redis and write it to Elasticsearch.

# Read data from Redis
# host         Redis host IP address
# port         Redis port
# data_type    Redis data type
# batch_count  Number of events to fetch from Redis in one batch
# password     Redis password
# key          Redis key to read from
input {
    redis {
        host => "ip"
        port => 6379
        db => 6
        data_type => "list"
        password => "password"
        key => "test_log"
    }
}

# Output the data to the Elasticsearch cluster
# hosts  Elasticsearch host address
# index  Name of the Elasticsearch index
output {
    elasticsearch {
        hosts => ["ip:9200"]
        index => "logs-%{+YYYY.MM.dd}"
    }
}
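While verifying the indexer, it can help to temporarily add a stdout output alongside Elasticsearch so each event is also printed to the console. This is a debugging sketch only, not part of the final configuration:

# Debugging only: print every event to the console in addition to indexing it
output {
    elasticsearch {
        hosts => ["ip:9200"]
        index => "logs-%{+YYYY.MM.dd}"
    }
    stdout {
        codec => rubydebug
    }
}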

5. Other

What remains is the Elasticsearch cluster and Kibana. Neither has anything particularly noteworthy; a quick online search turns up plenty of articles.

The above only reflects how my project uses ELK; it may not suit every scenario and is provided for reference only.