Use Docker to build ELK log system

After installing ELK locally the first time, I stopped working with it for a while. The end of the year is not so busy and it had been on my mind, so I recently started tinkering with it again. I went to Elastic's website, and sure enough, new versions keep arriving almost weekly. The setup described here is based on version 6.1.1.

Goals

  1. Collect Java log files and classify them by file, for example order logs, customer logs, etc.
  2. Handle multi-line log entries (mainly Java exception stack traces)

Overall architecture diagram

Prepare the images

Since 6.0, the official images have been maintained at www.docker.elastic.co/. Find the ELK images you need there and pull them. The official image names are quite long, so I re-tagged all of them, for example: docker tag docker.elastic.co/elasticsearch/elasticsearch:6.1.1 elasticsearch:latest. Use docker images to verify:
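For example, with version 6.1.1 the pull and re-tag steps look roughly like this. Only the Elasticsearch path is quoted in the text above; the Kibana, Logstash and Filebeat paths below follow the registry's naming scheme and should be checked against the registry:

docker pull docker.elastic.co/elasticsearch/elasticsearch:6.1.1
docker tag  docker.elastic.co/elasticsearch/elasticsearch:6.1.1 elasticsearch:latest

docker pull docker.elastic.co/kibana/kibana:6.1.1
docker tag  docker.elastic.co/kibana/kibana:6.1.1 kibana:latest

docker pull docker.elastic.co/logstash/logstash:6.1.1
docker tag  docker.elastic.co/logstash/logstash:6.1.1 logstash:latest

docker pull docker.elastic.co/beats/filebeat:6.1.1
docker tag  docker.elastic.co/beats/filebeat:6.1.1 filebeat:latest

docker images    # verify the new tags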

Install the Docker version of Elasticsearch

In production, vm.max_map_count must be set to at least 262144. There are two ways to set it:

  1. Permanent modification: add (or edit) a line in /etc/sysctl.conf:

grep vm.max_map_count /etc/sysctl.conf   # check the current value
vm.max_map_count=262144                  # modify or add this line

  2. Temporary modification, directly on the running machine:

sysctl -w vm.max_map_count=262144

Then we run Elasticsearch, exposing ports 9200 and 9300 of the container so that we can query and index ES from other machines, for example through the head plugin. Run the following command:

docker run -p 9200:9200 -p 9300:9300 --name elasticsearch -e "discovery.type=single-node" elasticsearch


In actual use you may need a cluster rather than a single node; that depends on your situation. If you need to keep historical data, you should also persist the data directory by mounting a local directory with -v.
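For example, a single-node sketch that also persists the data directory. The local path ~/elk/esdata is just an example; /usr/share/elasticsearch/data is the data directory of the official image, and you may need to adjust permissions on the local folder:

docker run -p 9200:9200 -p 9300:9300 --name elasticsearch \
  -e "discovery.type=single-node" \
  -v ~/elk/esdata:/usr/share/elasticsearch/data \
  elasticsearch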

Install the Docker version of Kibana

The main purpose of Kibana is to help us visualize the logs, making them easy to search and aggregate. It needs an ES service behind it, so we associate the deployed ES with Kibana, mainly via the --link parameter:

docker run -d -p 5601:5601 --link elasticsearch -e ELASTICSEARCH_URL=http://elasticsearch:9200 kibana


Using the --link parameter adds the Elasticsearch container's IP address to the Kibana container's hosts file, so we can reach the ES service directly through the defined name.
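You can verify this by looking at the hosts file inside the Kibana container; the output shown here is illustrative:

docker exec <kibana-container-id> cat /etc/hosts
# ...
# 172.17.0.2   elasticsearch   <elasticsearch-container-id>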

Install Logstash and Filebeat

For Kibana and ES installed above, we do not need to pay much attention to detailed configuration in a development environment. Logstash and Filebeat, however, do need careful configuration, because they are the key to meeting our requirements.

Logstash simply parses the log entries and outputs them to Elasticsearch when it is done.

Filebeat is a lightweight collector. We use it to collect the Java logs, tag the logs from different folders, merge multi-line log entries (mainly Java exception stack traces), and then send everything to Logstash.

The log file format is roughly: DATE LOG-LEVEL LOG-MESSAGE. The format is defined in log4j.properties; you can of course define your own output format.

Now we define logstash.conf; inside Logstash we mainly use the grok filter plugin.

logstash.conf:

input {
  beats {
    host => "0.0.0.0"    # listen on all interfaces so the Filebeat container can connect
    port => "5043"
  }
}
filter {
  if [fields][doc_type] == 'order' {
    grok {
      match => { "message" => "%{TIMESTAMP_ISO8601:timestamp} %{LOGLEVEL:level} %{JAVALOGMESSAGE:msg}" }
    }
  }
  # Two identical groks are written here only as a hint: in practice the log types
  # may have different formats. If they share the same format, one grok is enough.
  if [fields][doc_type] == 'customer' {
    grok {
      match => { "message" => "%{TIMESTAMP_ISO8601:timestamp} %{LOGLEVEL:level} %{JAVALOGMESSAGE:msg}" }
    }
  }
}

output {
  stdout { codec => rubydebug }
  elasticsearch {
    hosts => [ "elasticsearch:9200" ]    # the --link alias of the ES container
    index => "%{[fields][doc_type]}-%{+YYYY.MM.dd}"
  }
}

In logstash.conf we mainly use [fields][doc_type] to indicate the log type; it is defined in Filebeat, and we reuse it to build the index name.
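For example, for a line like the first one in customer.log below, the rubydebug output on stdout would contain roughly these fields (trimmed and illustrative):

{
      "message" => "2017-12-26 10:05:56,476 INFO ConfigClusterResolver:43 - Resolving eureka endpoints via configuration",
    "timestamp" => "2017-12-26 10:05:56,476",
        "level" => "INFO",
          "msg" => "ConfigClusterResolver:43 - Resolving eureka endpoints via configuration",
       "fields" => { "doc_type" => "customer" }
}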

Now assume we need to collect log files from two directories: /home/user/elk/logs/customer/*.log and /home/user/elk/logs/order/*.log:

customer.log:

2017-12-26 10:05:56,476 INFO ConfigClusterResolver:43 - Resolving eureka endpoints via configuration
2017-12-26 10:07:23,529 INFO WarehouseController:271 - findWarehouseList,json{"formJSON":{"userId":"885769620971720708"},"requestParameterMap":{},"requestAttrMap":{"name":"asdf","user":"8857696","ip":"183.63.112.1","source":"asdfa","customerId":"885768861337128965","IMEI":"863267033748196","sessionId":"xm1cile2bcmb15wtqmjno7tgz","sfUSCSsadDDD":"asdf/10069&ADR&1080&1920&OPPO R9s Plus&Android 6.0.1","URI":"/warehouse-service/appWarehouse/findByCustomerId.apec","encryptType":"2","requestStartTime":34506714681405}}
2017-12-26 10:07:23,650 INFO WarehouseServiceImpl:325 - warehouse list:8,warehouse str:[{"addressDetail":"nnnnnnnn","areaId":"210624","areaNa":""}]
2017-12-26 10:20:56,478 INFO ConfigClusterResolver:43 - Resolving eureka endpoints via configuration

order.log:

2017-12-26 11:38:20,374 INFO WebLogAspect:53 -- Request:18,SPEND TIME:0
2017-12-26 11:38:20,404 INFO NoticeServiceApplication:664 -- The following profiles are active: test
2017-12-26 11:41:07,754 INFO NoticeServiceApplication:664 -- The following profiles are active: test
2017-12-26 11:27:56,577 INFO WebLogAspect:51 -- Request:19,RESPONSE:"{\"data\":null,\"errorCode\":\"\",\"errorMsg\":\"\",\"repeatAct\":\"\",\"succeed\":true}"
2017-12-26 11:28:09,829 INFO WebLogAspect:53 -- SPEND TIME:1
2017-12-26 11:28:09,829 INFO WebLogAspect:42 -- Request:20,URL:http://192.168.7.203:30004/sr/flushCache
2017-12-26 11:28:09,830 INFO WebLogAspect:43 -- Request:20,HTTP_METHOD:POST
2017-12-26 11:28:09,830 INFO WebLogAspect:44 -- Request:20,IP:192.168.7.98
2017-12-26 11:28:09,830 INFO WebLogAspect:45 -- Request:20,CLASS_METHOD:com.notice.web.SystemRestrictController
2017-12-26 11:28:09,830 INFO WebLogAspect:46 -- Request:20,METHOD:flushRestrict
2017-12-26 11:28:09,830 INFO WebLogAspect:47 -- Request:20,ARGS:["{\n}"]
2017-12-26 11:28:09,830 DEBUG SystemRestrictController:231 -- set permissions chain
2017-12-26 12:38:51,126 INFO NoticeServiceApplication:664 -- The following profiles are active: test
2017-12-26 12:38:58,683 INFO RedisClusterConfig:107 -- //// -- Start single point Redis --
2017-12-26 12:39:00,325 DEBUG ApplicationContextRegister:26 -- ApplicationContextRegister.setApplicationContext:applicationContext org.springframework.boot.context.embedded.AnnotationConfigEmbeddedWebApplicationContext@5f150435: startup date [Tue Dec 26 12:38:51 CST 2017]; parent: org.springframework.context.annotation.AnnotationConfigApplicationContext@63c12fb0
2017-12-26 12:39:06,961 INFO NoticeServiceApplication:57 -- Started NoticeServiceApplication in 17.667 seconds (JVM running for 18.377)


As mentioned, the log format is basically DATE LOG-LEVEL LOG-MESSAGE, defined in log4j.properties. You can customize it; if you do, just change the grok pattern in logstash.conf accordingly.
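For reference, a log4j 1.x configuration that produces this DATE LOG-LEVEL LOG-MESSAGE shape could look roughly like this. It is an illustrative snippet, not taken from the project; adjust the file path and logger names to your own setup:

log4j.rootLogger=INFO, orderFile
log4j.appender.orderFile=org.apache.log4j.DailyRollingFileAppender
log4j.appender.orderFile.File=/home/user/elk/logs/order/order.log
log4j.appender.orderFile.layout=org.apache.log4j.PatternLayout
# yyyy-MM-dd HH:mm:ss,SSS LEVEL Class:Line - message
log4j.appender.orderFile.layout.ConversionPattern=%d{yyyy-MM-dd HH:mm:ss,SSS} %p %c{1}:%L - %m%n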

Next we deal with the problems Filebeat solves: collecting the logs, merging multi-line entries, and tagging the logs. filebeat.yml is defined as follows:

filebeat.yml

filebeat.prospectors:
- paths:
    - /home/user/elk/logs/order/*.log
  multiline:
      pattern: ^\d{4}
      negate: true
      match: after
  fields:
    doc_type: order
- paths:
    - /home/user/elk/logs/customer/*.log
  multiline:
      pattern: ^\d{4}
      negate: true
      match: after
  fields:
    doc_type: customer
output.logstash: # output address
  hosts: ["logstash:5043"]

  1. Log collection: the prospectors locate and read the log files directly.
  2. Multi-line logs: our log entries start with a date, i.e. four plain digits (YYYY), so we configure the multiline options to treat every line that does not match ^\d{4} as a continuation of the previous line (negate: true, match: after); see the stack-trace example after this list. The official documentation (Filebeat Multiline) covers this in detail.
  3. Tagging: this is the most important part. Its purpose is to let Logstash know what type of message Filebeat is sending, so that Logstash can route it to the right index in ES. Here fields is a built-in option and doc_type is our custom field.
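To illustrate the multiline setting, consider a hypothetical exception in order.log (the class names are made up):

2017-12-26 11:28:10,101 ERROR OrderController:88 - failed to create order
java.lang.NullPointerException: null
    at com.example.order.OrderServiceImpl.create(OrderServiceImpl.java:66)
    at com.example.order.OrderController.submit(OrderController.java:88)

Only the first line starts with four digits, so with pattern: ^\d{4}, negate: true and match: after, the stack-trace lines are appended to the preceding line and the whole exception reaches Logstash as a single event.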

The document_type option was deprecated in 5.5.0, which is why a custom field is used instead: www.elastic.co/guide/en/be…

With that in place, let's start Logstash and Filebeat.

Start the Docker version of Logstash:

docker run --rm -it --name logstash --link elasticsearch -d -v ~/elk/yaml/logstash.conf:/usr/share/logstash/pipeline/logstash.conf logstash


Start Filebeat, mounting the configuration file and the log directory into the container. There are other ways to get the logs into the container as well; it depends on your needs.

docker run --name filebeat -d --link logstash -v ~/elk/yaml/filebeat.yml:/usr/share/filebeat/filebeat.yml -v ~/elk/logs/:/home/user/elk/logs/ filebeat

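Since logstash.conf also prints every event to stdout with the rubydebug codec, a quick way to check that events are flowing is to watch the Logstash container logs:

docker logs -f logstash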

Finally, remember that when creating an index pattern in Kibana, the default is logstash-*; since we built the index name from our custom doc_type, you need to enter order* and customer* to create the two index patterns.
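Before creating the index patterns, you can check that the indices actually exist (assuming ES is reachable on localhost:9200):

curl 'http://localhost:9200/_cat/indices?v'
# should list indices such as order-2017.12.26 and customer-2017.12.26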

Then you can see your logs in Kibana's Discover page.

If you use my sample logs directly, adjust the timestamps slightly: change December 26, 2017 to the date you run the experiment, otherwise the entries will fall outside Kibana's default time range.

Pay attention to the local paths mounted with the -v parameter, and make sure the paths Filebeat collects in filebeat.yml are the paths as seen inside the container (i.e. the mount targets).

Configuration file repository: Use Docker to build ELK log system

Build log center based on ELK+Filebeat