preface

In the EFK infrastructure, we deploy Filebeat on the client to collect logs and ship them to Logstash. After Logstash parses the logs, it forwards them to Elasticsearch, and finally we view them through Kibana.

The basic EFK environment has already been set up. In this article, we use a real case to wire up data transmission among the three components and to solve some common problems encountered when using EFK.

First, take a look at the actual business log:

2020-01-09 10:03:26,719 INFO ========GetCostCenter Start===============
2020-01-09 10:03:44,267 WARN Cost center code less than 10 digits! {"deptId":"D000004345","companyCode":"01"}
2020-01-09 10:22:37,193 ERROR java.lang.IllegalStateException: SessionImpl[abcpI7fK-WYnW4nzXrv7w,]: can't call getAttribute() when session is no longer valid.
    at com.caucho.server.session.SessionImpl.getAttribute(SessionImpl.java:283)
    at weaver.filter.PFixFilter.doFilter(PFixFilter.java:73)
    at com.caucho.server.dispatch.FilterFilterChain.doFilter(FilterFilterChain.java:87)
    at weaver.filter.MonitorXFixIPFilter.doFilter(MonitorXFixIPFilter.java:30)
    at weaver.filter.MonitorForbiddenUrlFilter.doFilter(MonitorForbiddenUrlFilter.java:133)

The log format is: time, log level, log details. Our main task is to get these logs into the EFK stack.
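Concretely, each line splits into those three parts on whitespace. A tiny illustrative Python check (the sample line comes from the log above):

```python
line = "2020-01-09 10:03:44,267 WARN Cost center code less than 10 digits!"

# The timestamp itself contains a space, so split off the first three tokens:
date_part, time_part, level, details = line.split(" ", 3)
print(level)    # WARN
print(details)  # Cost center code less than 10 digits!
```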

Filebeat installation and configuration

  • Download Filebeat 7.5.1
  • Upload the downloaded file to the server and decompress it

    tar -zxvf filebeat-7.5.1-linux-x86_64.tar.gz
  • Modify filebeat.yml:
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /app/weaver/Resin/log/xxx.log

This section configures the log input and specifies the path of the log files to collect.

output.logstash:
  # The Logstash hosts
  hosts: ["172.31.0.207:5044"]

This section configures the log output and points it at the Logstash address.

  • Start the filebeat

    ./filebeat -e -c filebeat.yml

    If you need it to run silently in the background, start it with:

    nohup ./filebeat -e -c filebeat.yml &

Logstash configuration

The Logstash configuration is divided into three parts: input, filter, and output. input specifies the input, here opening a port for Filebeat to deliver logs to. filter parses and filters the log content. output specifies the destination; here it is configured directly with the address of ES.

input {
  beats {
    port => 5044
  }
}
output {
  elasticsearch {
    hosts => ["http://172.31.0.127:9200"]
    index => "myindex-%{+YYYY.MM.dd}"
    user => "elastic"
    password => "xxxxxx"
  }
}

After configuring Logstash, run the following command to restart it:

    docker-compose -f elk.yml restart logstash

After the above two steps, the application writes to its log file, Filebeat ships the file's contents to Logstash, and the logs that reach Kibana look like this:

The log shows two problems:

  • Because the error stack trace spans multiple lines, it is presented as many separate entries in Kibana, which makes the data messy to read. Stack exceptions need to be displayed as a single event.
  • Logs need to be parsed and displayed in the Time log level Log Details format.

improvements

  • Set merge lines in FileBeat

    Filebeat ships logs line by line by default, but our error logs span multiple lines, so we need a pattern that marks where an event starts so the remaining lines can be merged into it. Since all our log lines start with a timestamp, we add the following lines to the filebeat.inputs section of filebeat.yml:
multiline.pattern: '^\d{4}-\d{1,2}-\d{1,2}'  # a line starting with a date begins a new event
multiline.negate: true                       # lines that do NOT match the pattern...
multiline.match: after                       # ...are appended to the line before them
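The effect of these three settings can be sketched in Python. This is a simplified illustration of the merge logic, not Filebeat's actual implementation:

```python
import re

# A new log event starts with a date at the beginning of the line,
# mirroring the multiline.pattern used in filebeat.yml.
EVENT_START = re.compile(r"^\d{4}-\d{1,2}-\d{1,2}")

def merge_multiline(lines):
    """Group raw lines into events: non-matching lines join the previous event."""
    events = []
    for line in lines:
        if EVENT_START.match(line) or not events:
            events.append(line)
        else:
            events[-1] += "\n" + line
    return events

raw = [
    "2020-01-09 10:22:37,193 ERROR java.lang.IllegalStateException: ...",
    "    at com.caucho.server.session.SessionImpl.getAttribute(SessionImpl.java:283)",
    "    at weaver.filter.PFixFilter.doFilter(PFixFilter.java:73)",
    "2020-01-09 10:23:01,001 INFO next event",
]
events = merge_multiline(raw)
print(len(events))  # 2: the stack trace lines collapse into their ERROR line
```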

  • Set up log parsing in the Logstash configuration so that logs are parsed into the "time, log level, log details" format; for this we add a filter section to the Logstash configuration file:
filter {
  grok {
    match => {
      "message" => "(?<date>\d{4}-\d{2}-\d{2}\s\d{2}:\d{2}:\d{2}),\d{3} %{LOGLEVEL:loglevel} (?<des>.*)"
    }
  }
}

Here logs are parsed with grok syntax, which matches log content using regular expressions. You can test patterns with the Grok Debugger tool in Kibana.
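Grok patterns compile down to regular expressions, so the pattern above can also be checked locally with Python's re module. This is a rough testing equivalent only: %{LOGLEVEL:loglevel} is approximated here by an explicit alternation of common levels, which is an assumption, not grok's exact LOGLEVEL definition.

```python
import re

# Approximation of the grok pattern; %{LOGLEVEL:loglevel} is replaced by an
# assumed subset of levels. re.DOTALL lets <des> span merged multiline events.
PATTERN = re.compile(
    r"(?P<date>\d{4}-\d{2}-\d{2}\s\d{2}:\d{2}:\d{2}),\d{3} "
    r"(?P<loglevel>TRACE|DEBUG|INFO|WARN|ERROR|FATAL) "
    r"(?P<des>.*)",
    re.DOTALL,
)

msg = "2020-01-09 10:03:44,267 WARN Cost center code less than 10 digits!"
m = PATTERN.match(msg)
print(m.groupdict())
```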

After completing the configuration, we reopen the Kibana Discover page to check the logs; they match expectations. Perfect!

Q&A

Kibana garbled

The main cause is an encoding problem in the log file on the client. You can check the encoding of the log file with file xxx.log; if it reports an ISO-8859 encoding, the logs will usually be garbled. In that case we can specify the encoding of the log file in the Filebeat configuration file:

filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /app/weaver/Resin/log/xxx.log
  encoding: GB2312
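Why the garbling happens can be reproduced in a few lines of Python: bytes written in GB2312 do not round-trip when decoded with the wrong encoding. A minimal sketch; the sample text is illustrative:

```python
# Simulate a log file written in GB2312 being read with the wrong encoding.
text = "成本中心编码小于10位"      # a Chinese log message (illustrative)
raw = text.encode("gb2312")        # the bytes as they sit on disk

garbled = raw.decode("latin-1")    # wrong encoding: silent mojibake
print(garbled)

ok = raw.decode("gb2312")          # correct encoding, as set in filebeat.yml
print(ok)
```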

Kibana failed to extract the field

If this exception occurs when you open the Kibana Discover panel, just delete the .kibana_1 index in ES and then revisit Kibana.

View surrounding files

When viewing logs in a terminal, we usually check the surrounding context while troubleshooting, for example with the frequently used command cat xxx.log | grep -C50 keyword. How do you do this in Kibana?

Search for the keyword in Kibana, locate the specific log record, click the expand arrow on its left, and then click "View surrounding documents".

Dynamic index

Our log platform may need to connect to multiple business systems, and different indexes need to be established according to the business systems.

  • Tag the logs in Filebeat
- type: log
  ......
  fields:
    logType: oabusiness
  • Generate the index from the tag in the Logstash configuration
input {
  beats {
    port => 5044
  }
}
filter {
  if [fields][logType] == "oabusiness" {
    grok {
      match => {
        "message" => "(?<date>\d{4}-\d{2}-\d{2}\s\d{2}:\d{2}:\d{2}),\d{3} %{LOGLEVEL:loglevel} (?<des>.*)"
      }
    }
  }
}
output {
  elasticsearch {
    hosts => ["http://172.31.0.207:9200"]
    index => "%{[fields][logType]}-%{+YYYY.MM.dd}"
    user => "elastic"
    password => "elastic"
  }
}
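The index name Logstash builds from the tag can be sketched as a simple string template. This is illustrative Python mirroring the %{[fields][logType]}-%{+YYYY.MM.dd} pattern, not Logstash's own sprintf implementation:

```python
from datetime import date

def index_name(log_type: str, day: date) -> str:
    """Mirror Logstash's "%{[fields][logType]}-%{+YYYY.MM.dd}" index naming."""
    return f"{log_type}-{day.strftime('%Y.%m.%d')}"

print(index_name("oabusiness", date(2020, 1, 9)))  # oabusiness-2020.01.09
```

With this setup, each business system's logs land in their own daily index, so they can be searched and retained independently.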

Well, friends, that is all for this issue. If you found this article helpful, please share it and follow; see you next time!
