Since the volume of system logs is still manageable, ELK + Beats was chosen and no message queue was introduced; the architecture can be upgraded later if needed. You therefore only need to deploy the Elasticsearch and Logstash clusters on the logging platform, and Filebeat on the application servers.

Preparations before Installation

Java environment

ELK requires Java 8 or higher. If it is not installed, perform the following steps to install it:

# check the existing installation
$ rpm -qa | grep java
# batch uninstall
$ rpm -qa | grep java | xargs rpm -e --nodeps
# install OpenJDK 1.8
$ yum install -y java-1.8.0-openjdk*
$ java -version
openjdk version "1.8.0_151"

Configure environment variables in /etc/profile:

# point to the installation directory
JAVA_HOME=/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.151-1.b12.el6_9.x86_64
PATH=$JAVA_HOME/bin:$PATH
CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
JAVACMD=/usr/bin/java
export JAVA_HOME JAVACMD CLASSPATH PATH

Run source /etc/profile to make the configuration take effect.
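As a toy illustration of why source (rather than plain execution) is needed here, variables exported by a sourced file land in the current shell. The DEMO_JAVA_HOME variable and /tmp/demo_profile file below are made-up names for the demonstration:

```shell
# Write a throwaway profile fragment (hypothetical names, not the real /etc/profile)
echo 'export DEMO_JAVA_HOME=/usr/lib/jvm/demo' > /tmp/demo_profile
# Sourcing it exports the variable into the current shell
source /tmp/demo_profile
echo "$DEMO_JAVA_HOME"   # prints /usr/lib/jvm/demo
```

Running the file as a normal script instead would set the variable only in a child process and leave the current environment unchanged.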

Import the GPG key

Import the Elastic GPG key with rpm:

$ rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch

The yum command installs the latest version. To install an older version, download the corresponding RPM package from the official address and install it with rpm -ivh.

Elasticsearch

Installation

Download the latest version from the official address, then unpack it:

$ wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-6.1.1.tar.gz
$ mkdir -p /usr/local/elk
$ tar zxvf elasticsearch-6.1.1.tar.gz -C /usr/local/elk/
$ mv /usr/local/elk/elasticsearch-6.1.1 /usr/local/elk/elasticsearch

Before starting, modify the JVM heap size in the jvm.options configuration file; otherwise the process may run out of memory and fail to start.

$ vim config/jvm.options
# modify as needed
-Xms128m
-Xmx256m

Create an elk user, because new versions of Elasticsearch do not allow starting as root. To manage Elasticsearch with service, modify the startup user and installation directory in the init script:

$ useradd elk
$ chown -R elk:elk /usr/local/elk/elasticsearch

$ vim /etc/init.d/elasticsearch
ES_USER="elk"
ES_GROUP="elk"
ES_HOME="/usr/local/elk/elasticsearch"
MAX_OPEN_FILES=65536
MAX_MAP_COUNT=262144
LOG_DIR="$ES_HOME/logs"
DATA_DIR="$ES_HOME/data"

Start Elasticsearch, which listens on port 9200 by default.

# start on boot
$ chkconfig --add elasticsearch
$ chkconfig elasticsearch on
$ service elasticsearch start
$ netstat -tunpl | grep "9200"
tcp        0      0 127.0.0.1:9200      0.0.0.0:*       LISTEN      27029/java
# verify
$ curl http://127.0.0.1:9200

Finally, install the plugins used:

$ cd /usr/local/elk/elasticsearch
# ingest-geoip and ingest-user-agent resolve IP addresses and user agents, respectively
$ bin/elasticsearch-plugin install ingest-geoip
$ bin/elasticsearch-plugin install ingest-user-agent
$ bin/elasticsearch-plugin install x-pack
# change user passwords
$ bin/x-pack/setup-passwords interactive

After the X-Pack plugin is installed, all operations on Elasticsearch require authentication. The default username is elastic, and the default password is changeme.
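A quick way to check that authentication works is to pass the credentials to curl with -u. This sketch assumes the default elastic/changeme credentials mentioned above; the fallback message is only printed when the node is unreachable:

```shell
# Query the cluster root endpoint with basic auth
# (assumes the default elastic/changeme credentials; change them after setup-passwords)
curl -s --max-time 5 -u elastic:changeme http://127.0.0.1:9200 \
  || echo "Elasticsearch not reachable"
```

A successful call returns the cluster's JSON banner; a 401 response means the credentials were rejected.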

Kibana

First, create a yum source file named kibana.repo in /etc/yum.repos.d:

[kibana-5.x]
name=Kibana repository for 5.x packages
baseurl=https://artifacts.elastic.co/packages/5.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md

Install using yum command:

$ yum install -y kibana
$ mkdir -p /usr/local/elk
$ ln -s /usr/share/kibana /usr/local/elk/kibana
$ cd /usr/local/elk/kibana

Modify the following configuration items in the kibana.yml configuration file:

$ mkdir -p /usr/local/elk/kibana/config
$ mv /etc/kibana/kibana.yml /usr/local/elk/kibana/config
$ vim config/kibana.yml
server.name: "elk.fanhaobai.com"
elasticsearch.url: "http://127.0.0.1:9200"    # Elasticsearch address
kibana.index: ".kibana"
elasticsearch.username: "elastic"             # username
elasticsearch.password: "changeme"            # password

Install common plugins, such as X-Pack:

$ bin/kibana-plugin install x-pack

By default, the Node.js runtime behind Kibana allocates up to 1 GB of memory. You can add the --max-old-space-size parameter to the startup command to limit it:

$ vim bin/kibana
# add the --max-old-space-size=140 parameter
NODE_ENV=production exec "${NODE}" $NODE_OPTIONS --max-old-space-size=140 --no-warnings "${DIR}/src/cli" ${@}

Modify init startup script and start Kibana:

$ vim /etc/init.d/kibana
home=/usr/share/kibana
program=$home/bin/kibana
args=-c\ $home/config/kibana.yml

# fix permissions, then start
$ chown -R kibana:kibana /usr/share/kibana
$ chkconfig --add kibana
$ chkconfig kibana on
$ service kibana start

After configuring the Web service, visit elk.fanhaobai.com to see Kibana’s powerful and gorgeous interface.
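The web service in front of Kibana can be a plain nginx reverse proxy. The vhost below is only a minimal sketch: the server_name matches the domain used in this article, and Kibana is assumed to be on its default port 5601:

```
# Hypothetical minimal nginx vhost for Kibana (default port 5601 assumed)
server {
    listen       80;
    server_name  elk.fanhaobai.com;

    location / {
        proxy_pass http://127.0.0.1:5601;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```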

After installing the X-Pack plugin, accessing Kibana also requires authentication; any valid Elasticsearch username and password can be used to log in.

Logstash

Installation

First, create the logstash.repo file in /etc/yum.repos.d:

[logstash-5.x]
name=Elastic repository for 5.x packages
baseurl=https://artifacts.elastic.co/packages/5.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md

Install Logstash using yum and test:

# Logstash 5.x
$ yum install -y logstash
$ mkdir -p /usr/local/elk
$ ln -s /usr/share/logstash /usr/local/elk/logstash
$ cd /usr/local/elk/logstash
# test
$ bin/logstash -e 'input { stdin {} } output { stdout {} }'
The stdin plugin is now waiting for input:
elk
2017-11-21T22:25:07.264Z fhb elk

Modify the configuration file path:

# move the configuration directory
$ mv /etc/logstash /usr/local/elk/logstash/config
# modify file permissions
$ chown -R elk:elk /usr/local/elk/logstash

Modify the JVM memory size to prevent memory overflow exceptions:

$ vim config/jvm.options
# modify as required
-Xms80m
-Xmx150m

Generate and modify init startup script:

$ bin/system-install /etc/logstash/startup.options sysv
$ vim /etc/init.d/logstash
home=/usr/share/logstash
name=logstash
program=$home/bin/logstash
args=--path.settings\ $home/config
user="elk"
group="elk"

# enable at boot
$ chkconfig --add logstash
$ chkconfig logstash on

Install the X-Pack plugin for basic status monitoring:

$ bin/logstash-plugin install x-pack

Configuration

Main configuration file

The main Logstash configuration file is config/logstash.yml; configure it as follows:

path.data: /var/lib/logstash
path.logs: /usr/share/logstash/logs
path.config: /usr/share/logstash/config/conf.d
# Elasticsearch username and password
xpack.monitoring.elasticsearch.username: elastic
xpack.monitoring.elasticsearch.password: changeme

Pipeline configuration

A pipeline is structured as inputs → filters → outputs. Create a simple pipeline as conf.d/filebeat.conf; because the filtered logs are written to Elasticsearch, the username and password must be configured in the output section.

input {
    beats {
        port => 5044
    }
}
filter {
    if [fileset][name] =~ "access" {
        grok {
            match => { "message" => "%{COMBINEDAPACHELOG}" }
        }
        date {
            match => ["timestamp", "dd/MMM/YYYY:H:m:s Z"]
        }
    } else if [fileset][name] =~ "error" {
    } else {
        drop {}
    }
}
output {
    elasticsearch {
        hosts => "localhost:9200"
        manage_template => false
        index => "%{[@metadata][type]}-%{+YYYY.MM}"
        document_type => "%{[fields][env]}"   # document type
        user => "elastic"                     # username
        password => "changeme"                # password
    }
}
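The %{+YYYY.MM} sprintf suffix in the index option produces one index per month. Its output corresponds to what date prints for the same pattern; the nginx-www-access prefix below is taken from the document_type configured later in this article:

```shell
# Monthly index name, equivalent in shape to Logstash's
# "%{[@metadata][type]}-%{+YYYY.MM}" (Logstash formats @timestamp in UTC)
date -u +"nginx-www-access-%Y.%m"
```

For December 2017 this would yield nginx-www-access-2017.12, which matches the _index value shown in the sample document at the end of this article.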

See the Configuration Examples section for complete configurations and the Logstash Configuration Examples for more configurations.

Startup

# start
$ service logstash start
# verify it is listening
$ netstat -tnpl | grep 5044
tcp        0      0 0.0.0.0:5044        0.0.0.0:*       LISTEN      10132/java

Beats

Filebeat

Installation

Install Filebeat from the same Elastic yum source used above:

# Filebeat 5.6.6
$ yum install -y filebeat
$ mkdir -p /usr/local/elk/beats
$ ln -s /usr/share/filebeat /usr/local/elk/beats/filebeat
$ cd /usr/local/elk/beats/filebeat

Modify init startup script:

$ vim /etc/init.d/filebeat

home=/usr/share/filebeat
pidfile=${PIDFILE-/var/run/filebeat.pid}
agent=${BEATS_AGENT-$home/bin/filebeat}
args="-c $home/filebeat.yml -path.home $home -path.config $home -path.data $home/data -path.logs $home/logs"

Configure the startup service:

$ chkconfig --add filebeat
$ chkconfig filebeat on

Configuration

Create the Filebeat configuration file filebeat.yml and enable the nginx log module to collect access and error logs:

filebeat.modules:
- module: nginx
  access:
    enabled: true
    var.paths: ["/data/logs/fanhaobai.com.access.log"]   # log path
    prospector:
      document_type: nginx-www-access   # becomes the type field in Logstash
  error:
    enabled: true
    var.paths: ["/data/logs/error.log"]
    prospector:
      document_type: nginx-all-error
fields:
  env: prod
queue_size: 1000
bulk_queue_size: 0
# output to Logstash
output.logstash:
  enabled: true
  hosts: ["localhost:5044"]
  worker: 1
  loadbalance: true
  index: 'filebeat'

Startup

# start
$ service filebeat start
# view the push log
$ tailf /usr/local/elk/beats/filebeat/bin/logs/filebeat
2017-12-22T02:00:53+08:00 INFO Non-zero metrics in the last 30s: filebeat.harvester.open_files=1 filebeat.harvester.running=1 libbeat.logstash.publish.read_bytes=6 libbeat.logstash.publish.write_bytes=460

After Filebeat starts, it detects whether the collected files have new or updated content and pushes the data to Logstash in real time.

Because some Filebeat and Logstash configuration options are not backward compatible, an upgrade may leave the service unavailable. To keep yum update from upgrading them automatically, add the configuration item exclude=filebeat logstash to /etc/yum.conf.
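For reference, the resulting /etc/yum.conf would look something like this; only the exclude line is added, and the other [main] entries are whatever your distribution already ships:

```
[main]
# ... existing settings left untouched ...
exclude=filebeat logstash
```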

Data presentation

After Filebeat pushes the data through Logstash, Elasticsearch stores it in the following format:

{
    "_index": "nginx-www-access-2017.12",
    "_type": "prod",
    "_source": {
        "response_code": "200",
        "ip": "106.11.152.143",
        "offset": 81989257,
        "method": "GET",
        "user_name": "-",
        "input_type": "log",
        "http_version": "1.1",
        "read_timestamp": "2017-12-21T18:12:53.604Z",
        "source": "/data/logs/fanhaobai.com.access.log",
        "fileset": {
            "name": "access",
            "module": "nginx"
        },
        "type": "nginx-www-access",
        "url": "/2017/11/qconf-deploy.html",
        "referrer": "-",
        "@timestamp": "2017-12-21T18:12:53.000Z",
        "@version": "1",
        "beat": {
            "name": "fhb",
            "hostname": "fhb",
            "version": "5.6.5"
        },
        "host": "fhb",
        "body_sent": {
            "bytes": "44067"
        },
        "fields": {
            "env": "prod"
        }
    }
}

In Kibana, it looks like:

Related Articles »

  • ELK Centralized Logging Platform — Platform Architecture (2017-12-16)
  • ELK Centralized Logging Platform 3 — Advanced (2017-12-22)