Abstract

Log collection and viewing are key to troubleshooting online problems quickly and conveniently. This is especially true for microservices, where a central log store with search is essential. In a microservice architecture, the log center is typically built with ELK: Elasticsearch stores and queries the logs, Logstash collects them, and Kibana visualizes them. In this section we integrate ELK into a SpringBoot service to implement log collection.

Contents

I. Introduction to ELK

ELK is short for Elasticsearch + Logstash + Kibana.

Elasticsearch is a distributed search and analysis engine that can be used for full-text search, structured search, and analytics, or any combination of the three. Built on Lucene, Elasticsearch is now one of the most widely used open source search engines.

Logstash is essentially a real-time pipeline that moves data from an input to an output, while letting you add filters in the middle of the pipe. Logstash provides a number of powerful filters to fit different application scenarios.

Kibana is an open source analytics and visualization platform designed to work with Elasticsearch. Kibana lets you search, view, and interact with data stored in Elasticsearch indices, and makes it easy to present advanced analysis and visualizations with a variety of charts, tables, and maps.

II. Installation of ELK

Download the packages from the official site: https://www.elastic.co/cn/dow… JDK 1.8 needs to be installed ahead of time.

The three components are installed below in this order: Logstash -> Elasticsearch -> Kibana.

Logstash

Follow the instructions on the official site: https://www.elastic.co/cn/dow…

Unpack

# unpack
tar -zxvf logstash-7.13.1-darwin-x86_64.tar.gz
# switch directory
cd logstash-7.13.1/
Modify the configuration file

Go to the config directory and edit config/logstash.yml:

# modify
http.host: 127.0.0.1
Create a new configuration file

vim logstash.conf

input {
  tcp {
    # server mode
    mode => "server"
    # IP and port to listen on
    host => "127.0.0.1"
    port => 4560
    # json codec
    codec => json_lines
  }
}
output {
  elasticsearch {
    action => "index"
    # elasticsearch address
    hosts => "127.0.0.1:9200"
    # index name
    index => "applog"
  }
}
Start
 ./bin/logstash -f ./config/logstash.conf

Access

http://127.0.0.1:9600/ returns:

{
    "host": "xiexinmingdeMacBook-Pro.local",
    "version": "7.13.2",
    "http_address": "127.0.0.1:9600",
    "id": "1adacb8f-14be-45a6-a851-f96a6143e992",
    "name": "xiexinmingdeMacBook-Pro.local",
    "ephemeral_id": "2f720f09-b984-4462-bb27-7256a3e962a7",
    "status": "green",
    "snapshot": false,
    "pipeline": {
        "workers": 16,
        "batch_size": 125,
        "batch_delay": 50
    },
    "build_date": "2021-06-10T19:51:49Z",
    "build_sha": "6d32f7df79a7d10d821b4cbff51c47f46d8c67b1",
    "build_snapshot": false
}
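
At this point the pipeline can be smoke-tested without SpringBoot by pushing a single JSON line into the TCP input. Below is a minimal sketch under the logstash.conf above (json_lines codec on port 4560); the class name and event fields are illustrative:

import java.io.OutputStream;
import java.net.Socket;
import java.nio.charset.StandardCharsets;

// Sends one JSON event to the Logstash tcp input defined in logstash.conf.
// The json_lines codec expects newline-terminated JSON objects.
public class LogstashTcpSmokeTest {
    public static void main(String[] args) throws Exception {
        try (Socket socket = new Socket("127.0.0.1", 4560);
             OutputStream out = socket.getOutputStream()) {
            String event = "{\"message\":\"hello from smoke test\",\"level\":\"INFO\"}\n";
            out.write(event.getBytes(StandardCharsets.UTF_8));
            out.flush();
        }
    }
}

Once Elasticsearch is up (next step), an event sent this way should land in the applog index.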

Elasticsearch

Unpack

# unpack
tar -zxvf elasticsearch-7.13.1-darwin-x86_64.tar.gz
# switch directory
cd elasticsearch-7.13.1/
Modify the configuration file config/elasticsearch.yml
# IP, default 127.0.0.1
network.host: 127.0.0.1
# port, default 9200
http.port: 9200
Start Elasticsearch
./bin/elasticsearch

If the JDK reports an error during startup:

Error: future versions of Elasticsearch will require Java 11; your Java version from [/usr/local/NLP/Java/jdk1.8.0_162/jre] does not meet this requirement.

Since 7.0, Elasticsearch ships with its own bundled JDK (Java 11+), but the system default here is JDK 1.8. To use Elastic's bundled JDK, modify the Elasticsearch startup script:
vim bin/elasticsearch

# configure Elasticsearch to use its bundled JDK
export JAVA_HOME=/usr/local/NLP/elasticsearch-7.13.1/jdk
export PATH=$JAVA_HOME/bin:$PATH

if [ -x "$JAVA_HOME/bin/java" ]; then
  JAVA="/usr/local/NLP/elasticsearch-7.13.1/jdk/bin/java"
else
  JAVA=`which java`
fi
Access

http://127.0.0.1:9200/

Results obtained:

{ "name" : "xiexinmingdeMacBook-Pro.local", "cluster_name" : "elasticsearch", "cluster_uuid" : "3vwu7eyptx-yamrnkuq4tq ", "version" : {"number" : "7.13.1", "build_type" : "default", "build_type" : "tar", "build_hash" : "9a7758028e4ea59bcab41c12004603c5a7dd84a9", "build_date" : "2021-05-28T17:40:59.346932922Z", "build_snapshot" : false, "lucene_version" : "8.8.2", "MINIMUM_WIRE_COMPATIBILITY_VERSION" : "6.8.0", "MINIMUM_INDEX_COMPATIBILITY_VERSION" : "6.0.0-beta1"}, "tagline" : "You Know, for Search"}

Kibana

Unpack

# unpack
tar -zxvf kibana-7.13.1-linux-x86_64.tar.gz
# switch directory
cd kibana-7.13.1-linux-x86_64/
Modify the configuration file config/kibana.yml:
server.host: "127.0.0.1"
# Elasticsearch address (in Kibana 7.x the setting is elasticsearch.hosts)
elasticsearch.hosts: ["http://127.0.0.1:9200"]
Start
./bin/kibana

Log:

log   [17:47:12.207] [info][server][Kibana][http] http server running at http://localhost:5601
Access

http://localhost:5601

III. SpringBoot configuration

1. pom.xml configuration

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>
    <groupId>com.gf</groupId>
    <artifactId>springboot-elk</artifactId>
    <version>0.0.1-SNAPSHOT</version>
    <packaging>jar</packaging>
    <name>springboot-elk</name>
    <description>Demo project for Spring Boot</description>

    <parent>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-parent</artifactId>
        <version>2.1.1.RELEASE</version>
        <relativePath/> <!-- lookup parent from repository -->
    </parent>

    <properties>
        <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
        <project.reporting.outputEncoding>UTF-8</project.reporting.outputEncoding>
        <java.version>1.8</java.version>
    </properties>

    <dependencies>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-web</artifactId>
        </dependency>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-test</artifactId>
            <scope>test</scope>
        </dependency>
        <!-- logback -->
        <dependency>
            <groupId>ch.qos.logback</groupId>
            <artifactId>logback-classic</artifactId>
        </dependency>
        <dependency>
            <groupId>net.logstash.logback</groupId>
            <artifactId>logstash-logback-encoder</artifactId>
            <version>5.2</version>
        </dependency>
    </dependencies>

    <build>
        <plugins>
            <plugin>
                <groupId>org.springframework.boot</groupId>
                <artifactId>spring-boot-maven-plugin</artifactId>
                <configuration>
                    <fork>true</fork>
                </configuration>
            </plugin>
        </plugins>
    </build>
</project>

2. logback.xml

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE configuration>
<configuration>
    <include resource="org/springframework/boot/logging/logback/defaults.xml"/>
    <include resource="org/springframework/boot/logging/logback/console-appender.xml"/>
    <!-- application name -->
    <property name="APP_NAME" value="ccos-upms"/>
    <!-- log file path -->
    <property name="LOG_FILE_PATH" value="${LOG_FILE:-${LOG_PATH:-${LOG_TEMP:-${java.io.tmpdir:-/tmp}}}/logs}"/>
    <contextName>${APP_NAME}</contextName>
    <!-- FILE appender: rolls the log file daily -->
    <appender name="FILE" class="ch.qos.logback.core.rolling.RollingFileAppender">
        <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
            <fileNamePattern>${LOG_FILE_PATH}/${APP_NAME}-%d{yyyy-MM-dd}.log</fileNamePattern>
            <maxHistory>30</maxHistory>
        </rollingPolicy>
        <encoder>
            <pattern>${FILE_LOG_PATTERN}</pattern>
        </encoder>
    </appender>
    <!-- LOGSTASH appender: ships logs to Logstash over TCP -->
    <appender name="LOGSTASH" class="net.logstash.logback.appender.LogstashTcpSocketAppender">
        <!-- logstash address, matching the tcp input in logstash.conf -->
        <destination>127.0.0.1:4560</destination>
        <encoder charset="UTF-8" class="net.logstash.logback.encoder.LogstashEncoder"/>
    </appender>
    <root>
        <appender-ref ref="CONSOLE"/>
        <appender-ref ref="FILE"/>
        <appender-ref ref="LOGSTASH"/>
    </root>
</configuration>

IV. Testing
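
The test case itself is not listed in the original; below is a minimal sketch of one, assuming the pom.xml and logback.xml above (the class name and messages are illustrative). A plain JUnit test is enough, because logback loads logback.xml from the classpath and the LOGSTASH appender ships each event to 127.0.0.1:4560:

import org.junit.Test;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

// Writes a few events through SLF4J/logback; the LOGSTASH appender
// configured above forwards them to Logstash, which indexes them into applog.
public class LogTest {

    private static final Logger logger = LoggerFactory.getLogger(LogTest.class);

    @Test
    public void testLog() {
        logger.debug("debug message from springboot-elk");
        logger.info("info message from springboot-elk");
        logger.warn("warn message from springboot-elk");
        logger.error("error message from springboot-elk");
    }
}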

After running the test case, go back to Kibana: Stack Management –> Index Patterns, and create an index pattern with the index value from the Logstash configuration, here applog.

(Screenshots: Stack Management -> Index Patterns -> Create index pattern -> Step 2 of 2: Configure settings.)
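
If applog does not show up when creating the index pattern, you can check whether Logstash has actually created the index in Elasticsearch. A minimal sketch using Elasticsearch's _cat API follows (plain curl or a browser works just as well; the class name is illustrative):

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

// Queries Elasticsearch for the "applog" index created by Logstash.
// Assumes Elasticsearch is listening on 127.0.0.1:9200 as configured above.
// getInputStream() throws an IOException if the index does not exist yet.
public class IndexCheck {
    public static void main(String[] args) throws Exception {
        URL url = new URL("http://127.0.0.1:9200/_cat/indices/applog?v");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("GET");
        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(conn.getInputStream(), "UTF-8"))) {
            String line;
            while ((line = in.readLine()) != null) {
                System.out.println(line); // prints index health, doc count, size
            }
        }
    }
}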

Return to Discover, and the logs written by the test case should now be searchable under the applog index pattern.