Suppose that, as in the previous case, we have implemented tracing across service calls, but the logs are scattered over different machines. Even when a problem occurs and we want to locate it quickly, we first have to gather the logs from each machine and merge them before we can start looking. In this case you need to introduce a log analysis system, such as ELK, which collects log data from multiple servers in one place. When a fault occurs, you can then simply search for the full call-chain information of a request by its traceId.

1. Introduction to ELK

ELK consists of three components:

  • Elasticsearch is an open-source distributed search engine. It features distributed search with zero configuration, automatic discovery, index sharding, index replication, a RESTful interface, multiple data sources, and automatic search load balancing.
  • Logstash is a fully open-source tool that collects, parses, and stores logs for later use.
  • Kibana is a free, open-source tool that provides a friendly web interface for log analysis, letting you aggregate, analyze, and search the important log data held by Logstash and Elasticsearch.

ELK website: www.elastic.co/cn/

2. Output JSON logs

You can use Logback to output JSON-formatted logs, have Logstash collect them into Elasticsearch, and then view them in Kibana. To output the log data in JSON format, first add a dependency:

<dependency>
    <groupId>net.logstash.logback</groupId>
    <artifactId>logstash-logback-encoder</artifactId>
    <version>4.8</version>
    <scope>runtime</scope>
</dependency>

Then create a logback-spring.xml file that defines the format of the data to be collected:

<!-- Appender to log to file in a JSON format -->
<appender name="logstash"
        class="ch.qos.logback.core.rolling.RollingFileAppender">
    <file>${LOG_FILE}.json</file>
    <rollingPolicy
            class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
        <fileNamePattern>${LOG_FILE}.json.%d{yyyy-MM-dd}.gz</fileNamePattern>
        <maxHistory>7</maxHistory>
    </rollingPolicy>
    <encoder class="net.logstash.logback.encoder.LoggingEventCompositeJsonEncoder">
        <providers>
            <timestamp>
                <timeZone>UTC</timeZone>
            </timestamp>
            <pattern>
                <pattern>
                    {
                    "severity": "%level",
                    "service": "${springAppName:-}",
                    "trace": "%X{X-B3-TraceId:-}",
                    "span": "%X{X-B3-SpanId:-}",
                    "parent": "%X{X-B3-ParentSpanId:-}",
                    "exportable": "%X{X-Span-Export:-}",
                    "pid": "${PID:-}",
                    "thread": "%thread",
                    "class": "%logger{40}",
                    "rest": "%message"
                    }
                </pattern>
            </pattern>
        </providers>
    </encoder>
</appender>
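Note that the snippet above is only the appender definition. The ${springAppName} placeholder is not defined by Logback itself, and the appender still has to be attached to a logger before anything is written. A minimal sketch of those missing pieces, assuming the appender name above and an INFO root level (adjust both to your setup):

<!-- Expose spring.application.name to Logback as ${springAppName};
     this Spring Boot extension only works in a logback-spring.xml -->
<springProperty scope="context" name="springAppName" source="spring.application.name"/>

<!-- Attach the JSON appender so log events are actually written to the file -->
<root level="INFO">
    <appender-ref ref="logstash"/>
</root>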

For details, see: https://github.com/spring-clou…sleuth-documentation-apps/blob/master/service1/src/main/resources/logback-spring.xml

After the integration, you will see a log file ending in .json in the log output directory. Its contents are in JSON form and can be collected directly by Logstash:

{
    "timestamp": "2017-11-30T01:…221+00:00",
    "severity": "DEBUG",
    "service": "fsh-substitution",
    "trace": "41b5a575c26eeea1",
    "span": "41b5a575c26eeea1",
    "parent": "41b5a575c26eeea1",
    "exportable": "false",
    "pid": "12024",
    "thread": "hystrix-fsh-house-10",
    "class": "c.f.a.client.fsh.house.HouseRemoteClient",
    "rest": "[HouseRemoteClient#hosueInfo] <--- END HTTP (796-byte body)"
}
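With the JSON file in place, Logstash only needs to read it and forward each event to Elasticsearch. Below is a minimal pipeline sketch; the log path, Elasticsearch address, and index naming scheme are assumptions to adapt to your environment:

input {
  file {
    # Read the JSON files produced by the logback appender above
    # (assumed location; point this at your ${LOG_FILE} directory)
    path => "/var/log/apps/*.json"
    codec => "json"
  }
}

output {
  elasticsearch {
    # Assumed Elasticsearch address and daily index per service
    hosts => ["localhost:9200"]
    index => "logstash-%{service}-%{+YYYY.MM.dd}"
  }
}

Once the events are indexed, point Kibana at the logstash-* indices and search for trace:41b5a575c26eeea1 (the traceId from the sample above) in the Discover page to pull up the full call chain of that request, no matter which machine produced each line.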