Why use ELK to collect logs

At present, most projects adopt a microservice architecture. In the early stage of our project, in order to go live on schedule, no log collection and analysis platform was built: logs were stored locally on each server and searched one by one. As the number of services grew, each service was deployed as a cluster and the number of servers increased rapidly, which exposed several problems:

  • To query the logs of a single service, you have to log in to multiple servers.
  • Logs are hard to correlate. A single request passes through multiple nodes, so piecing together the logs of the whole flow is a lot of work.
  • Operation and maintenance (O&M) is cumbersome. Not every colleague has permission to log in to the servers, yet they still need logs to troubleshoot problems, so a colleague with permission has to download the logs and send them over.
  • Early warning is difficult. When a service becomes abnormal, there is no mechanism to notify the person in charge in time.

Later, LogHub on Ant Financial Cloud was adopted to collect and store logs centrally. Since LogHub is not open source, its exact implementation is unknown; here, ELK (Elasticsearch + Logstash + Kibana) is used to collect logs, and its mechanism is in fact similar to LogHub's.

An overview of ELK

ELK is an acronym for Elasticsearch, Logstash, and Kibana:

  • Elasticsearch is an open-source distributed search engine that collects, analyzes, and stores data. Its features include distributed operation, zero configuration, automatic discovery, automatic index sharding, an index replica mechanism, a RESTful interface, multiple data sources, and automatic search load balancing.
  • Logstash is a tool for collecting, analyzing, and filtering logs, and it supports a large number of data acquisition methods. The client is installed on the hosts whose logs need to be collected; the server filters and modifies the received logs and forwards them to Elasticsearch.
  • Kibana provides a web-based visualization interface for Logstash and Elasticsearch, using reports and charts to help aggregate, analyze, and search important log data.

Log collection schemes

Logstash -> Elasticsearch -> Kibana

Advantages: This architecture is simple to build and easy to use

Disadvantages:

  • Logstash is deployed on every node; it occupies CPU and a large amount of memory at runtime, which affects node performance to some extent.
  • Log data is not buffered, so it may be lost.

Kafka -> Elasticsearch -> Kibana

This article implements the first scheme; the second scheme, which adds Kafka as a buffer to reduce the risk of losing log data, will be implemented later.

Build the ELK environment with Docker Compose

The Docker images need to be pulled in advance. I chose version 6.4.0 for Elasticsearch, Logstash, and Kibana; ideally, all three should be the same version.

docker pull elasticsearch:6.4.0
docker pull logstash:6.4.0
docker pull kibana:6.4.0

Create a local directory for your files

Create elasticsearch and logstash directories to store the configuration files.

Create the Logstash configuration file logstash.conf and place it in the logstash directory:

input {
  tcp {
    mode => "server"
    host => "0.0.0.0"
    port => 4560
    codec => json_lines
  }
}
output {
  elasticsearch {
    hosts => "es:9200"
    index => "springboot-%{+YYYY.MM.dd}"}}Copy the code

Start the ELK services with a docker-compose.yml script

docker-compose.yml contains the following:

version: '3'
services:
  elasticsearch:
    image: elasticsearch:6.4.0
    container_name: elasticsearch
    environment:
      - "cluster.name=elasticsearch" # set the cluster name to elasticsearch
      - "discovery.type=single-node" # start in single-node mode
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m" # set the JVM memory size
    volumes:
      - /Users/storage/software/docker/elk/elasticsearch/plugins:/usr/share/elasticsearch/plugins # mount the plugin directory
      - /Users/storage/software/docker/elk/elasticsearch/data:/usr/share/elasticsearch/data # mount the data directory
    ports:
      - 9200:9200
      - 9300:9300
  kibana:
    image: kibana:6.4.0
    container_name: kibana
    links:
      - elasticsearch:es # access elasticsearch via the es domain name
    depends_on:
      - elasticsearch # kibana starts after elasticsearch has started
    environment:
      - "elasticsearch.hosts=http://es:9200" # address used to access elasticsearch
    ports:
      - 5601:5601
  logstash:
    image: logstash:6.4.0
    container_name: logstash
    volumes:
      - /Users/storage/software/docker/elk/logstash/logstash.conf:/usr/share/logstash/pipeline/logstash.conf # mount the logstash configuration file
    depends_on:
      - elasticsearch # logstash starts after elasticsearch has started
    links:
      - elasticsearch:es # access elasticsearch via the es domain name
    ports:
      - 4560:4560


Run the following command in the directory containing docker-compose.yml:

docker-compose up -d

Startup may take a while, so be patient.

Install the json_lines plugin in Logstash

# Enter the logstash container (e9c845c8d48e is the container ID)
docker exec -it e9c845c8d48e /bin/bash
# Go to the bin directory
cd /bin/
# Install the plugin
logstash-plugin install logstash-codec-json_lines
# Exit the container
exit
# Restart the logstash service
docker restart logstash


Visit http://127.0.0.1:9200/; Elasticsearch should return its cluster information as JSON.

Visit http://127.0.0.1:5601; the Kibana interface should load.

If both respond, Elasticsearch and Kibana have started successfully.
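
The same check can be scripted. Here is a minimal sketch, assuming the port mappings from the docker-compose.yml above, that fetches the Elasticsearch root endpoint and prints the JSON response:

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class ElasticsearchHealthCheck {
    public static void main(String[] args) throws Exception {
        // Root endpoint exposed by the 9200:9200 port mapping
        URL url = new URL("http://127.0.0.1:9200/");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("GET");
        try (BufferedReader reader = new BufferedReader(
                new InputStreamReader(conn.getInputStream(), StandardCharsets.UTF_8))) {
            String line;
            while ((line = reader.readLine()) != null) {
                System.out.println(line); // cluster name, version number, etc.
            }
        }
    }
}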

Integrate Spring Boot with Logstash

Add the logstash-logback-encoder dependency to pom.xml:

<!-- logstash -->
<dependency>
    <groupId>net.logstash.logback</groupId>
    <artifactId>logstash-logback-encoder</artifactId>
    <version>5.3</version>
</dependency>

Add a logback-spring.xml configuration file so that Logback sends its logs to Logstash:

    <!-- Appender that outputs logs to logstash -->
    <appender name="LOGSTASH" class="net.logstash.logback.appender.LogstashTcpSocketAppender">
        <!-- A reachable logstash log collection port -->
        <destination>127.0.0.1:4560</destination>
        <encoder charset="UTF-8" class="net.logstash.logback.encoder.LogstashEncoder"/>
    </appender>
    
     <springProfile name="dev">
        <root>
            <level value="INFO"/>
            <appender-ref ref="stdout"/>
            <appender-ref ref="asyncInfo"/>
            <appender-ref ref="asyncWarn"/>
            <appender-ref ref="asyncError"/>
            <appender-ref ref="LOGSTASH"/>
        </root>
    </springProfile>

    <springProfile name="test,prod">
        <root>
            <level value="INFO"/>
            <appender-ref ref="asyncInfo"/>
            <appender-ref ref="asyncWarn"/>
            <appender-ref ref="asyncError"/>
             <appender-ref ref="LOGSTASH"/>
        </root>
    </springProfile>
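With the dependency and the appender in place, ordinary SLF4J logging is shipped to Logstash; no Logstash-specific code is needed in the application. A minimal sketch (the class and messages are illustrative, assuming the logback-spring.xml above is on the classpath):

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class LogDemo {
    private static final Logger log = LoggerFactory.getLogger(LogDemo.class);

    public static void main(String[] args) {
        // With the LOGSTASH appender active, each call also sends a JSON event to port 4560
        log.info("application started");
        log.warn("warnings are shipped too");
        log.error("as are errors", new RuntimeException("example"));
    }
}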

View log information in Kibana

Create an index pattern, e.g. springboot-*, to match the springboot-yyyy.MM.dd indices written by Logstash

View collected logs

Start the project; the startup logs should now appear in Elasticsearch.
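
This setup also helps with the log-correlation pain point from the introduction. LogstashEncoder includes SLF4J MDC entries as fields in each JSON event, so putting a request-scoped trace ID into the MDC lets Kibana filter out a whole call chain. A hedged sketch (the traceId field name is my own choice, not part of the original setup):

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.slf4j.MDC;

import java.util.UUID;

public class TraceIdDemo {
    private static final Logger log = LoggerFactory.getLogger(TraceIdDemo.class);

    public static void main(String[] args) {
        // Attach a trace ID to the current thread's logging context
        MDC.put("traceId", UUID.randomUUID().toString());
        try {
            // Every event in this scope carries the traceId field,
            // so Kibana can filter the whole flow with traceId:"<value>"
            log.info("request received");
            log.info("request processed");
        } finally {
            MDC.remove("traceId"); // always clean up thread-local state
        }
    }
}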

Conclusion

After setting up the ELK log system, we can view system logs directly in Kibana and search them, instead of logging in to each server one by one.