This is the fourth day of my participation in the November Gwen Challenge. For event details, see: The Last Gwen Challenge 2021.

Today I want to introduce Logstash, a tool I have been using a lot in my recent work. It is very friendly for log transfer and collection, and makes it easy to synchronize data into Elasticsearch or Kafka.

Synchronizing file data to Elasticsearch

First, pull the image:

docker pull logstash:6.4.0

The custom grok pattern file (mounted as /config-dir/cmap_log_pattern and referenced by patterns_dir in the configuration below):

STIME %{YEAR}-%{MONTHNUM}-%{MONTHDAY} %{HOUR}:?%{MINUTE}:?%{SECOND},?%{MSECONDS}
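Note that MSECONDS is not one of the grok patterns shipped with Logstash, so the pattern file also has to define it before %{MSECONDS:...} can be used in the match rules below. A minimal definition, assuming a three-digit millisecond field, would be:

MSECONDS \d{3}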

logstash.conf configuration:

Read file data and write it to Elasticsearch:

input {
    file {
        path => ["/home/work/testVolume/test_map_log/*.log"]
        type => "test_map_new"
        start_position => "beginning"
    }
}

filter {
    grok {
        patterns_dir => ["/config-dir/cmap_log_pattern"]
        match => {
            "message" => [
                "\[%{YEAR:year}-%{MONTHNUM:month}-%{MONTHDAY:day} %{HOUR:hour}:%{MINUTE:minute}:%{SECOND:second},%{MSECONDS:mill_seconds}\]\[user_id:%{GREEDYDATA:user_id},mobile:%{GREEDYDATA:user_mobile},status:%{GREEDYDATA:user_status},real_name:%{GREEDYDATA:real_name},email:%{GREEDYDATA:user_email},city:%{GREEDYDATA:user_city},permission_info:%{GREEDYDATA:permission_info},b_stree_permission:%{GREEDYDATA:b_stree_permission},together_permission:%{GREEDYDATA:together_permission},is_admin:%{GREEDYDATA:is_admin}\]\[URL:%{GREEDYDATA:uri}\]\[params:%{GREEDYDATA:params_json_content}\]",
                "\[%{YEAR:year}-%{MONTHNUM:month}-%{MONTHDAY:day} %{HOUR:hour}:%{MINUTE:minute}:%{SECOND:second},%{MSECONDS:mill_seconds}\]\[user_id:%{GREEDYDATA:user_id},mobile:%{GREEDYDATA:mobile},platformCompany:%{GREEDYDATA:platformCompany},real_name:%{GREEDYDATA:real_name},email:%{GREEDYDATA:email},city:%{GREEDYDATA:city},role:%{GREEDYDATA:role},platformCompany:%{GREEDYDATA:platformCompany}\]\[URL:%{GREEDYDATA:uri}\]\[params:%{GREEDYDATA:params_json_content}\]",
                "\[%{YEAR:year}-%{MONTHNUM:month}-%{MONTHDAY:day} %{HOUR:hour}:%{MINUTE:minute}:%{SECOND:second},%{MSECONDS:mill_seconds}\]\[user_id:%{GREEDYDATA:user_id}\]\[URL:%{GREEDYDATA:uri}\]\[params:%{GREEDYDATA:params_json_content}\]"
            ]
        }
    }
    json {
        source => "params_json_content"
        target => "params_json"
        remove_field => ["params_json_content"]
    }
}

output {
    elasticsearch {
        hosts => ["127.0.0.1:9200"]
        index => "test_log"
        user => "test"
        password => "xxxxx"
    }
    stdout { codec => line }
}
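For reference, a log line shaped roughly like the one below would match the third (simplest) grok pattern above; the concrete values are made up for illustration. The json filter then parses the params section into the params_json field:

[2021-11-04 10:23:45,123][user_id:10086][URL:/api/map/query][params:{"city":"beijing","page":1}]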

Read Kafka data and write it to Elasticsearch:

input {
    kafka {
        bootstrap_servers => ["xxx.xxx.xxx.xxx:9092"]
        auto_offset_reset => "latest"
        consumer_threads => 5
        decorate_events => true
        group_id => "xxx"
        topics => ["xxxxxxxxxx"]
        type => "xxxxx"
    }
}

output {
    stdout {}
    elasticsearch {
          hosts => ["xxx.xxx.xxx.xxx:9200"]
          index => "kafka-xxx-%{+YYYY.MM.dd}"
    }
}

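To sanity-check this pipeline, you can push a test message into the topic with Kafka's console producer (the broker address and topic name below are the same placeholders used in the config above); every line you type is sent as one message:

bin/kafka-console-producer.sh --broker-list xxx.xxx.xxx.xxx:9092 --topic xxxxxxxxxx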

Start docker command:

docker run -d --name logstash_test --log-opt max-size=10m --log-opt max-file=3 -v /config-dir:/config-dir -v /home/work/logstash_test/logstash:/home/work/logstash_test/logstash logstash:6.4.0 -f /config-dir/logstash.conf

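Once the container is up, you can confirm that the pipeline started cleanly and is shipping events by tailing the container logs (container name taken from the run command above):

docker logs -f logstash_test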

That covers reading files and writing to Elasticsearch. Another option is to deploy Logstash as a standalone service and have other services write to Elasticsearch through it, by sending their log events to Logstash over TCP.

The related logstash.conf configuration:

input {
    tcp {
        host => "0.0.0.0"
        port => "5044"
        codec => json
    }
}

filter {
    if [type] == "logstash" {
        ruby {
            code => "event.set('timestamp', event.timestamp.time.localtime.strftime('%Y-%m-%d %H:%M:%S'))"
        }
    }
}

output {
    elasticsearch {
        hosts => ["xx.xx.xx.xx:9200","xx.xx.xx.xx:9200"]
        codec => "json"
    }
    stdout { codec => json }
}
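Because the TCP input uses codec => json, each line a client sends should be a JSON object. An event shaped roughly like the one below (the field names other than type are only an illustration) would pass the if [type] == "logstash" check and get the localized timestamp added by the ruby filter:

{"type":"logstash","level":"INFO","message":"user login success"}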

Start command:

docker run -it -d -p 5044:5044 --name logstash --net somenetwork -v /docker/logstash/logstash.yml:/usr/share/logstash/config/logstash.yml -v /docker/logstash/conf.d/:/usr/share/logstash/conf.d/ logstash:6.4.0

Other services can then write to Elasticsearch simply by sending their log events to the Logstash service at xx.xx.xx.xx:5044.
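As a quick smoke test (the host is the same placeholder as above, and nc is just one convenient way to open a TCP connection), you can send a single JSON event to the port and then check that it appears in Elasticsearch:

echo '{"type":"logstash","message":"hello from nc"}' | nc xx.xx.xx.xx 5044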