Why build a log system

When a fault occurs in production, the usual approach is to connect to the server and run log-query commands such as tail, cat, sed, and grep to locate the problem. But microservices typically deploy multiple instances of each service, which means searching the log files in every instance's log directory, and each application instance usually has a log-rolling policy (for example, one log file per day). Such tedious work greatly reduces troubleshooting efficiency. To solve this, you need a unified log system that can manage these logs and provide retrieval, helping engineers locate and fix faults in time. Thus the ELK logging system was born.

What is ELK

ELK is an acronym for ElasticSearch, Logstash, and Kibana.

ElasticSearch

Elasticsearch is a distributed, RESTful search and data analysis engine that addresses a wide variety of use cases. It is a full-text search engine built on Apache Lucene and written in Java, and as the core of ELK it is where the data is centrally stored.
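Everything in ES is exposed over HTTP, so a search is just a GET request. A minimal sketch (the index name app-logs and the query term are made up for illustration):

    # query a hypothetical index "app-logs" for documents whose message field contains "timeout"
    curl -X GET "http://localhost:9200/app-logs/_search?q=message:timeout&pretty"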

Logstash

Logstash is a free and open server-side data-processing pipeline that ingests data from multiple sources, transforms it, and then sends it to a "stash" of your choice.
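The pipeline idea can be tried in a single line; a minimal sketch using Logstash's -e flag, which accepts a pipeline definition on the command line (run from the Logstash install directory):

    # read events from stdin, apply no filters, and print them to stdout
    bin/logstash -e 'input { stdin { } } output { stdout { } }'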

Kibana

With Kibana, you can visualize your Elasticsearch data and navigate the Elastic Stack, making it easy to do everything from tracking query load to understanding how requests flow through your applications.

Installation and Startup

First of all, you need a server; this article uses a Linux server as the example. ELK requires a Java environment, which is assumed to be installed already.
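To confirm the Java environment before proceeding, a quick check:

    java -version
    # expect output identifying an installed JDK, e.g. "openjdk version ..."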

Download address

All three ELK products can be downloaded from the official website.

Website: www.elastic.co/cn/download…

Install ElasticSearch

  1. Use the wget command to download, e.g. wget artifacts.elastic.co/downloads/e… (see the consolidated sketch after these steps)

  2. After downloading, unpack: tar -zxvf elasticsearch-7.5.1-linux-x86_64.tar.gz

  3. Delete the installation file: rm -rf elasticsearch-7.5.1-linux-x86_64.tar.gz
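Since the wget URL above is truncated, here is the whole sequence as one sketch, assuming the standard Elastic artifact URL for this version:

    # assumed artifact URL; adjust the version to the one you need
    wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-7.5.1-linux-x86_64.tar.gz
    tar -zxvf elasticsearch-7.5.1-linux-x86_64.tar.gz
    rm -f elasticsearch-7.5.1-linux-x86_64.tar.gz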

Start ElasticSearch

  1. Go to the config folder of the ES installation directory: cd elasticsearch-7.5.1/config

  2. Edit the elasticsearch.yml configuration file (vim elasticsearch.yml) and change the following:

    cluster.name: dh-elasticsearch            # cluster name
    node.name: node-1                         # node name
    path.data: /opt/elasticsearch-7.5.1/data  # where data is stored
    path.logs: /opt/elasticsearch-7.5.1/logs  # where logs are stored
    network.host: 0.0.0.0                     # listen address
    http.port: 9200                           # port
    http.cors.enabled: true                   # enable cross-origin requests
    http.cors.allow-origin: "*"

    One thing to note here is that Elasticsearch cannot be started as root. You need to create a new user and start ES with that user (a consolidated sketch follows this list):

    1. Create a Linux user: adduser yezi
    2. Set a password: passwd yezi (it must be entered twice)
    3. Grant the user ownership of the directory: chown -R yezi /opt/elasticsearch-7.5.1
    4. Switch to the new user and run the startup command
  3. Go to the bin directory and run ./elasticsearch -d (-d means start in the background)
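Pulling the commands above into one runnable sketch (it assumes this article's /opt install path; adjust to yours):

    # as root: create the user and hand over the install directory
    adduser yezi
    passwd yezi                              # enter the password twice
    chown -R yezi /opt/elasticsearch-7.5.1

    # as the new user: start ES in the background
    su - yezi
    cd /opt/elasticsearch-7.5.1/bin
    ./elasticsearch -d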

Visit ElasticSearch

Enter http://192.168.18.244:9200/ in your browser.

The node responds with a JSON summary of its name, cluster, and version.
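Equivalently, you can check from the shell with curl; a sketch of the kind of response to expect (the exact values will differ on your node):

    curl http://192.168.18.244:9200/
    # {
    #   "name" : "node-1",
    #   "cluster_name" : "dh-elasticsearch",
    #   "version" : { "number" : "7.5.1", ... },
    #   "tagline" : "You Know, for Search"
    # }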

This indicates that single-node ES has been set up on Linux.

Install Kibana

  1. Use the wget command to download, e.g. wget artifacts.elastic.co/downloads/k…

  2. After downloading, unpack tar -zxvf kibana-7.5.1-linux-x86_64.tar.gz

  3. Delete the installation file: rm -rf kibana-7.5.1-linux-x86_64.tar.gz

Start Kibana

  1. Go to the Kibana configuration folder: cd kibana-7.5.1/config/

  2. Edit the kibana.yml configuration file (vim kibana.yml) and enter the following:

    server.port: 5601                                      # port
    server.host: "0.0.0.0"                                 # allow all external access
    server.name: "yezi"                                    # service name
    elasticsearch.hosts: ["http://192.168.18.244:9200"]    # the ES instance used for all queries
    i18n.locale: "zh-CN"                                   # display the interface in Chinese
  3. Go to the bin directory and run sh kibana &

Visit Kibana

Enter http://192.168.18.244:5601 in your browser.

The Kibana home page is displayed, which means Kibana launched successfully.
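Kibana also exposes a status endpoint that can be checked from the command line; a minimal sketch (response details vary by version):

    curl http://192.168.18.244:5601/api/status
    # a JSON document whose overall state is "green" means Kibana is healthy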

Install Logstash

  1. Use the wget command to download, e.g. wget artifacts.elastic.co/downloads/l…

  2. After downloading, unpack tar -zxvf logstash-7.5.1.tar.gz

  3. Delete the installation file rm -rf logstash-7.5.1.tar.gz

Start Logstash

  1. Go to the Logstash configuration directory: cd logstash-7.5.1/config/

  2. Edit the logstash.yml configuration file (vim logstash.yml) and enter the following:

    node.name: yezi                               # node name, usually the hostname
    path.data: /usr/local/logstash/plugin-data    # plugin/persistent-data directory (create it first)
    config.reload.automatic: true                 # automatically reload changed config files
    config.reload.interval: 10s                   # how often to check for config changes
    http.host: "127.0.0.1"                        # access host, usually a domain name or IP address
  3. Go to the bin directory and create the logstashyezi.conf file (vim logstashyezi.conf):

    input {
        file {
            path => "/home/denghua/mysql/yezi.log"
            type => "yezim"
            start_position => beginning
        }
    }
    filter {

    }
    output {
        elasticsearch {
            hosts => "localhost:9200"
            index => "es-message-%{+YYYY.MM.dd}"
        }
        stdout { codec => rubydebug }
    }

    Input: the data source. This article uses a newly created log file as an example.

    • path: the path of the source log file
    • type: the log type, attached to each event
    • start_position: where to begin reading, usually from the start of the file

    Output: the data destination. Since this article builds ELK, the data is sent to ElasticSearch.

    • hosts: the ES address
    • index: the target index (date-suffixed, so one index per day)
    • stdout: additionally print each event to standard output with the rubydebug codec, for debugging
  4. To start Logstash, run sh logstash -f logstashyezi.conf &

    • If an error about --path.data is reported:

    • sh logstash -f logstashyezi.conf --path.data=/home/denghua/mysql &

    • path.data can be any directory you specify yourself

  5. Go to the log directory and run the tail command to view the logs; output like the sketch below is displayed.
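A sketch of what one event looks like in rubydebug format (the field values are illustrative):

    {
              "path" => "/home/denghua/mysql/yezi.log",
              "type" => "yezim",
           "message" => "Simulate Input",
          "@version" => "1",
        "@timestamp" => 2020-01-01T08:00:00.000Z,
              "host" => "localhost"
    }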

Logstash has started successfully!

Whenever the yezi.log file is written to, Logstash pushes the data to ElasticSearch, and you can see it in Kibana.

Let’s put it to the test

Test

Since our log source is the yezi.log file we created ourselves, we simulate the input manually for the test.

  1. Enter Kibana and select Discover, the first button on the left toolbar, as shown below

  2. Create an index pattern es-message* so that it matches the es-message-%{+YYYY.MM.dd} index defined in the Logstash output.

  3. You can filter by time range at the upper right, for example from 15 minutes ago to now.

  4. Go to the log directory: cd /home/denghua/mysql/

  5. To manually simulate a log entry, run echo "Simulate Input" >> yezi.log (>> appends to the file rather than overwriting it)

  6. Then go back to the Kibana panel in step 1 and click the refresh button in the upper right corner to get the following

    Here you can see that the manually simulated log entry "Simulate Input" is displayed on the page.

    At this point, the standalone version of ELK has been built.

    In addition, the filter panel in the middle lets you search by field, which you can explore on your own.
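Independently of the Kibana UI, you can also ask ES directly whether the simulated entry was indexed; a quick sketch using the index name from the Logstash config above:

    curl 'http://192.168.18.244:9200/es-message-*/_search?q=message:Simulate&pretty'
    # a hit whose _source.message is "Simulate Input" confirms the pipeline end to end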