Source: jiandansuifeng.blog.csdn.net/article/details/107361190

The overall process is as follows:

Preparing the server

The server nodes are listed here so that you can check each step in the rest of the article against the corresponding node.

The SpringBoot project is ready

Introduce log4j2 to replace SpringBoot's default logging framework. The demo project structure is as follows:

pom.xml

<dependencies>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-web</artifactId>
        <!-- exclude SpringBoot's default logging starter -->
        <exclusions>
            <exclusion>
                <groupId>org.springframework.boot</groupId>
                <artifactId>spring-boot-starter-logging</artifactId>
            </exclusion>
        </exclusions>
    </dependency>
    <!-- log4j2 -->
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-log4j2</artifactId>
    </dependency>
    <!-- disruptor, required by log4j2's AsyncLogger (version 3.4.2 assumed) -->
    <dependency>
        <groupId>com.lmax</groupId>
        <artifactId>disruptor</artifactId>
        <version>3.4.2</version>
    </dependency>
</dependencies>

log4j2.xml

<?xml version="1.0" encoding="UTF-8"?>
<Configuration status="INFO" schema="Log4J-V2.0.xsd" monitorInterval="600">
    <Properties>
        <Property name="LOG_HOME">logs</Property>
        <property name="FILE_NAME">collector</property>
        <property name="patternLayout">[%d{yyyy-MM-dd'T'HH:mm:ss.SSSZZ}] [%level{length=5}] [%thread-%tid] [%logger] [%X{hostName}] [%X{ip}] [%X{applicationName}] [%F,%L,%C,%M] [%m] ## '%ex'%n</property>
    </Properties>
    <Appenders>
        <Console name="CONSOLE" target="SYSTEM_OUT">
            <PatternLayout pattern="${patternLayout}"/>
        </Console>
        <RollingRandomAccessFile name="appAppender" fileName="${LOG_HOME}/app-${FILE_NAME}.log" filePattern="${LOG_HOME}/app-${FILE_NAME}-%d{yyyy-MM-dd}-%i.log">
            <PatternLayout pattern="${patternLayout}"/>
            <Policies>
                <TimeBasedTriggeringPolicy interval="1"/>
                <SizeBasedTriggeringPolicy size="500MB"/>
            </Policies>
            <DefaultRolloverStrategy max="20"/>
        </RollingRandomAccessFile>
        <RollingRandomAccessFile name="errorAppender" fileName="${LOG_HOME}/error-${FILE_NAME}.log" filePattern="${LOG_HOME}/error-${FILE_NAME}-%d{yyyy-MM-dd}-%i.log">
            <PatternLayout pattern="${patternLayout}"/>
            <Filters>
                <!-- only WARN and above go to the error file -->
                <ThresholdFilter level="warn" onMatch="ACCEPT" onMismatch="DENY"/>
            </Filters>
            <Policies>
                <TimeBasedTriggeringPolicy interval="1"/>
                <SizeBasedTriggeringPolicy size="500MB"/>
            </Policies>
            <DefaultRolloverStrategy max="20"/>
        </RollingRandomAccessFile>
    </Appenders>
    <Loggers>
        <!-- business-related async loggers -->
        <AsyncLogger name="com.bfxy.*" level="info" includeLocation="true">
            <AppenderRef ref="appAppender"/>
        </AsyncLogger>
        <AsyncLogger name="com.bfxy.*" level="info" includeLocation="true">
            <AppenderRef ref="errorAppender"/>
        </AsyncLogger>
        <Root level="info">
            <AppenderRef ref="CONSOLE"/>
            <AppenderRef ref="appAppender"/>
            <AppenderRef ref="errorAppender"/>
        </Root>
    </Loggers>
</Configuration>
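With this patternLayout, every log event is rendered as a single bracket-delimited line. An illustrative, hand-written example (not captured output; the exact values depend on your host and application):

[2019-01-23T10:00:00.000+08:00] [INFO] [http-nio-8001-exec-1-28] [com.bfxy.IndexController] [localhost] [192.168.11.31] [collector] [IndexController.java,12,com.bfxy.IndexController,index] [I am an info log] ## ''

This one-line-per-event format is what the Filebeat multiline merge (lines not starting with "[" belong to the previous event) and the Logstash grok expression later in this article rely on.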

IndexController

A test controller that prints logs for debugging:

import lombok.extern.slf4j.Slf4j;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

@Slf4j
@RestController
public class IndexController {

    @RequestMapping(value = "/index")
    public String index() {
        InputMDC.putMDC();
        log.info("I am an info log");
        log.warn("I am a warn log");
        log.error("I am an error log");
        return "idx";
    }

    @RequestMapping(value = "/err")
    public String err() {
        InputMDC.putMDC();
        try {
            int a = 1 / 0;
        } catch (Exception e) {
            log.error("arithmetic exception", e);
        }
        return "err";
    }
}

InputMDC

Populates the [%X{hostName}], [%X{ip}] and [%X{applicationName}] fields used in the log pattern:

import org.slf4j.MDC;
import org.springframework.context.EnvironmentAware;
import org.springframework.core.env.Environment;
import org.springframework.stereotype.Component;

@Component
public class InputMDC implements EnvironmentAware {

    private static Environment environment;

    @Override
    public void setEnvironment(Environment environment) {
        InputMDC.environment = environment;
    }

    public static void putMDC() {
        MDC.put("hostName", NetUtil.getLocalHostName());
        MDC.put("ip", NetUtil.getLocalIp());
        MDC.put("applicationName", environment.getProperty("spring.application.name"));
    }
}

NetUtil

import java.lang.management.ManagementFactory;
import java.lang.management.RuntimeMXBean;
import java.net.InetAddress;
import java.net.NetworkInterface;
import java.net.SocketAddress;
import java.net.UnknownHostException;
import java.nio.channels.SocketChannel;
import java.util.Enumeration;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class NetUtil {

    public static String normalizeAddress(String address) {
        String[] blocks = address.split("[:]");
        if (blocks.length > 2) {
            throw new IllegalArgumentException(address + " is invalid");
        }
        String host = blocks[0];
        int port = 80;
        if (blocks.length > 1) {
            port = Integer.valueOf(blocks[1]);
        } else {
            address += ":" + port; // use default port 80
        }
        String serverAddr = String.format("%s:%d", host, port);
        return serverAddr;
    }

    public static String getLocalAddress(String address) {
        String[] blocks = address.split("[:]");
        if (blocks.length != 2) {
            throw new IllegalArgumentException(address + " is invalid address");
        }
        String host = blocks[0];
        int port = Integer.valueOf(blocks[1]);
        if ("0.0.0.0".equals(host)) {
            return String.format("%s:%d", NetUtil.getLocalIp(), port);
        }
        return address;
    }

    private static int matchedIndex(String ip, String[] prefix) {
        for (int i = 0; i < prefix.length; i++) {
            String p = prefix[i];
            if ("*".equals(p)) { // "*" means any public IP, so skip loopback and private ranges
                if (ip.startsWith("127.") || ip.startsWith("10.") || ip.startsWith("172.") || ip.startsWith("192.")) {
                    continue;
                }
                return i;
            } else {
                if (ip.startsWith(p)) {
                    return i;
                }
            }
        }
        return -1;
    }

    public static String getLocalIp(String ipPreference) {
        if (ipPreference == null) {
            ipPreference = "*>10>172>192>127";
        }
        String[] prefix = ipPreference.split("[> ]+");
        try {
            Pattern pattern = Pattern.compile("[0-9]+\\.[0-9]+\\.[0-9]+\\.[0-9]+");
            Enumeration<NetworkInterface> interfaces = NetworkInterface.getNetworkInterfaces();
            String matchedIp = null;
            int matchedIdx = -1;
            while (interfaces.hasMoreElements()) {
                NetworkInterface ni = interfaces.nextElement();
                Enumeration<InetAddress> en = ni.getInetAddresses();
                while (en.hasMoreElements()) {
                    InetAddress addr = en.nextElement();
                    String ip = addr.getHostAddress();
                    Matcher matcher = pattern.matcher(ip);
                    if (matcher.matches()) {
                        int idx = matchedIndex(ip, prefix);
                        if (idx == -1) continue;
                        if (matchedIdx == -1) {
                            matchedIdx = idx;
                            matchedIp = ip;
                        } else {
                            if (matchedIdx > idx) {
                                matchedIdx = idx;
                                matchedIp = ip;
                            }
                        }
                    }
                }
            }
            if (matchedIp != null) return matchedIp;
            return "127.0.0.1";
        } catch (Exception e) {
            return "127.0.0.1";
        }
    }

    public static String getLocalIp() {
        return getLocalIp("*>10>172>192>127");
    }

    public static String remoteAddress(SocketChannel channel) {
        SocketAddress addr = channel.socket().getRemoteSocketAddress();
        String res = String.format("%s", addr);
        return res;
    }

    public static String localAddress(SocketChannel channel) {
        SocketAddress addr = channel.socket().getLocalSocketAddress();
        String res = String.format("%s", addr);
        return addr == null ? res : res.substring(1);
    }

    public static String getPid() {
        RuntimeMXBean runtime = ManagementFactory.getRuntimeMXBean();
        String name = runtime.getName();
        int index = name.indexOf("@");
        if (index != -1) {
            return name.substring(0, index);
        }
        return null;
    }

    public static String getLocalHostName() {
        try {
            return (InetAddress.getLocalHost()).getHostName();
        } catch (UnknownHostException uhe) {
            String host = uhe.getMessage();
            if (host != null) {
                int colon = host.indexOf(':');
                if (colon > 0) {
                    return host.substring(0, colon);
                }
            }
            return "UnknownHost";
        }
    }
}

Start the project and hit the /index and /err endpoints; you can see that two log files, app-collector.log and error-collector.log, are generated under the project's logs directory.
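A quick way to exercise both endpoints and watch the files grow (assuming the service listens on port 8001, the port used for this deployment later in the article):

curl http://localhost:8001/index
curl http://localhost:8001/err
tail -f logs/app-collector.log logs/error-collector.log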

We deploy the SpringBoot service on the machine 192.168.11.31.

Install and start Kafka

Kafka download address:

kafka.apache.org/downloads.h…

The Kafka broker depends on ZooKeeper, so make sure the ZooKeeper cluster (192.168.11.111-113 here) is installed and running before starting the broker.

## extract and rename
tar -zxvf kafka_2.12-2.1.0.tgz -C /usr/local/
mv /usr/local/kafka_2.12-2.1.0 /usr/local/kafka_2.12
## edit the broker configuration
vim /usr/local/kafka_2.12/config/server.properties
## modify the following settings:
broker.id=0
port=9092
host.name=192.168.11.51
advertised.host.name=192.168.11.51
log.dirs=/usr/local/kafka_2.12/kafka-logs
num.partitions=2
zookeeper.connect=192.168.11.111:2181,192.168.11.112:2181,192.168.11.113:2181
## create the log folder:
mkdir /usr/local/kafka_2.12/kafka-logs
## start the broker:
/usr/local/kafka_2.12/bin/kafka-server-start.sh /usr/local/kafka_2.12/config/server.properties &
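Two optional sanity checks, not part of the original steps (assuming a JDK on the broker host, and that ZooKeeper's four-letter-word commands are enabled): the broker runs as a Java process with main class kafka.Kafka, and it registers an ephemeral node under /brokers/ids in ZooKeeper.

## the broker should show up as a running Java process
jps -l | grep -i kafka
## and should appear among ZooKeeper's ephemeral nodes
echo dump | nc 192.168.11.111 2181 | grep brokers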

Create two topics

## create the topics
kafka-topics.sh --zookeeper 192.168.11.111:2181 --create --topic app-log-collector --partitions 1 --replication-factor 1
kafka-topics.sh --zookeeper 192.168.11.111:2181 --create --topic error-log-collector --partitions 1 --replication-factor 1

We can then check the topic details:

kafka-topics.sh --zookeeper 192.168.11.111:2181 --topic app-log-collector --describe

You can see that the app-log-collector and error-log-collector topics have been created successfully.
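Optionally, you can verify the broker end to end with the console tools shipped with Kafka: attach a consumer in one shell, type a message into a producer in another, and watch it arrive.

## shell 1: consume
/usr/local/kafka_2.12/bin/kafka-console-consumer.sh --bootstrap-server 192.168.11.51:9092 --topic app-log-collector --from-beginning
## shell 2: produce (type a line and press Enter)
/usr/local/kafka_2.12/bin/kafka-console-producer.sh --broker-list 192.168.11.51:9092 --topic app-log-collector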

Install and start Filebeat

Download and extract Filebeat:

cd /usr/local/software
tar -zxvf filebeat-6.6.0-linux-x86_64.tar.gz -C /usr/local/
cd /usr/local
mv filebeat-6.6.0-linux-x86_64/ filebeat-6.6.0

To configure Filebeat, refer to the following yml configuration file:

vim /usr/local/filebeat-6.6.0/filebeat.yml
###################### Filebeat Configuration Example #########################
filebeat.prospectors:
- input_type: log
  paths:
    - /usr/local/logs/app-collector.log
  document_type: "app-log"
  multiline:
    #pattern: '^\s*(\d{4}|\d{2})\-(\d{2}|[a-zA-Z]{3})\-(\d{2}|\d{4})'   # would match lines starting with a timestamp such as 2017-11-15 08:04:23:889
    pattern: '^\['                 # match lines starting with "["
    negate: true                   # lines NOT matching the pattern are continuation lines
    match: after                   # and are merged to the end of the previous line
    max_lines: 2000                # maximum number of lines to merge
    timeout: 2s                    # flush the merge buffer after this timeout (value assumed; truncated in the source)
  fields:
    logbiz: collector
    logtopic: app-log-collector    ## Kafka topic
    evn: dev
- input_type: log
  paths:
    - /usr/local/logs/error-collector.log
  document_type: "error-log"
  multiline:
    #pattern: '^\s*(\d{4}|\d{2})\-(\d{2}|[a-zA-Z]{3})\-(\d{2}|\d{4})'   # would match lines starting with a timestamp such as 2017-11-15 08:04:23:889
    pattern: '^\['                 # match lines starting with "["
    negate: true                   # lines NOT matching the pattern are continuation lines
    match: after                   # and are merged to the end of the previous line
    max_lines: 2000                # maximum number of lines to merge
    timeout: 2s                    # flush the merge buffer after this timeout (value assumed; truncated in the source)
  fields:
    logbiz: collector
    logtopic: error-log-collector  ## Kafka topic
    evn: dev
output.kafka:
  enabled: true
  hosts: ["192.168.11.51:9092"]
  topic: '%{[fields.logtopic]}'
  partition.hash:
    reachable_only: true
  compression: gzip
  max_message_bytes: 1000000
  required_acks: 1
logging.to_files: true

Start filebeat:

Check whether the configuration is correct

cd /usr/local/filebeat-6.6.0
./filebeat -c filebeat.yml -configtest    ## prints "Config OK" when the configuration is valid

Start Filebeat

/usr/local/filebeat-6.6.0/filebeat &

Check whether the startup is successful

ps -ef | grep filebeat

You can see that Filebeat has started successfully.

Hit 192.168.11.31:8001/index and 192.168.11.31:8001/err, then check Kafka's log directory: you can see that the app-log-collector-0 and error-log-collector-0 folders have been generated, indicating that Filebeat is already delivering data to Kafka.
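Concretely, listing the broker's data directory (the log.dirs path configured in server.properties above) should now show one folder per topic partition:

ls /usr/local/kafka_2.12/kafka-logs/
## app-log-collector-0  error-log-collector-0  ...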

Logstash installation

Let’s create a new folder under the logstash installation directory

mkdir script

Then cd into the directory and create a logstash-script.conf file:

cd script
vim logstash-script.conf
## the multiline plugin can also be used for other, similar stack-trace-style text, e.g. Linux kernel logs
input {
  kafka {
    ## topics beginning with app-log-
    topics_pattern => "app-log-.*"
    bootstrap_servers => "192.168.11.51:9092"
    codec => json
    ## increase consumer threads to match the partition count
    consumer_threads => 1
    decorate_events => true
    #auto_offset_reset => "latest"
    group_id => "app-log-group"
  }
  kafka {
    ## topics beginning with error-log-
    topics_pattern => "error-log-.*"
    bootstrap_servers => "192.168.11.51:9092"
    codec => json
    consumer_threads => 1
    decorate_events => true
    #auto_offset_reset => "latest"
    group_id => "error-log-group"
  }
}

filter {
  ## time zone conversion
  ruby {
    code => "event.set('index_time', event.timestamp.time.localtime.strftime('%Y.%m.%d'))"
  }
  if "app-log" in [fields][logtopic] {
    grok {
      ## expression matching the log4j2 patternLayout defined earlier
      match => ["message", "\[%{NOTSPACE:currentDateTime}\] \[%{NOTSPACE:level}\] \[%{NOTSPACE:thread-id}\] \[%{NOTSPACE:class}\] \[%{DATA:hostName}\] \[%{DATA:ip}\] \[%{DATA:applicationName}\] \[%{DATA:location}\] \[%{DATA:messageInfo}\] ## (\'\'|%{QUOTEDSTRING:throwable})"]
    }
  }
  if "error-log" in [fields][logtopic] {
    grok {
      match => ["message", "\[%{NOTSPACE:currentDateTime}\] \[%{NOTSPACE:level}\] \[%{NOTSPACE:thread-id}\] \[%{NOTSPACE:class}\] \[%{DATA:hostName}\] \[%{DATA:ip}\] \[%{DATA:applicationName}\] \[%{DATA:location}\] \[%{DATA:messageInfo}\] ## (\'\'|%{QUOTEDSTRING:throwable})"]
    }
  }
}

## test output to the console:
output {
  stdout { codec => rubydebug }
}

## output to elasticsearch:
output {
  if "app-log" in [fields][logtopic] {
    elasticsearch {
      # es server address
      hosts => ["192.168.11.35:9200"]
      # username and password
      user => "elastic"
      password => "123456"
      # index name, rolled daily, e.g. app-log-collector-2019.01.23
      index => "app-log-%{[fields][logbiz]}-%{index_time}"
      # load-balance log messages across the es cluster via sniffing
      # (see http://192.168.11.35:9200/_nodes/http?pretty)
      sniffing => true
      # logstash ships a default mapping template; overwrite it
      template_overwrite => true
    }
  }
  if "error-log" in [fields][logtopic] {
    elasticsearch {
      hosts => ["192.168.11.35:9200"]
      user => "elastic"
      password => "123456"
      index => "error-log-%{[fields][logbiz]}-%{index_time}"
      sniffing => true
      template_overwrite => true
    }
  }
}

Start Logstash

/usr/local/logstash-6.6.0/bin/logstash -f /usr/local/logstash-6.6.0/script/logstash-script.conf
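If you want to catch syntax errors before launching, Logstash can validate the pipeline configuration and exit (this check is a suggestion, not part of the original steps):

/usr/local/logstash-6.6.0/bin/logstash -f /usr/local/logstash-6.6.0/script/logstash-script.conf --config.test_and_exit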

After it starts successfully, we access 192.168.11.31:8001/err again.

You can see that the Logstash console starts printing the parsed log events.
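You can also confirm that the daily indices have been created in Elasticsearch, using the credentials configured in the Logstash output above:

curl -u elastic:123456 "http://192.168.11.35:9200/_cat/indices?v"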

ElasticSearch and Kibana

I haven't covered installing ES and Kibana in a previous blog post, and there is plenty of material online, so please search for the setup steps yourself.

Once setup is complete, open Kibana's management page at 192.168.11.35:5601 and select Management -> Kibana: Index Patterns.

Then click Create index pattern:

  • Index pattern: enter app-log-*
  • Time Filter field name: select currentDateTime

With that, the index pattern has been created successfully.

If we hit 192.168.11.31:8001/err again, we can see that a new log entry has arrived.

The full log information is displayed.

At this point, our complete log collection and visualization pipeline is done!


Finally, let me recommend three original SpringBoot + Vue projects, each with complete video walkthroughs, documentation, and source code:

Build a complete project with SpringBoot + ElasticSearch + Canal

  • Video tutorial: www.bilibili.com/video/BV1Jq…
  • Complete development documentation: www.zhuawaba.com/post/124
  • Online demo: www.zhuawaba.com/dailyhub

【VueAdmin】A hands-on guide to developing a SpringBoot + Jwt + Vue front-end/back-end separated admin system

  • Full 800-minute video tutorial: www.bilibili.com/video/BV1af…
  • Complete development documentation (front end): www.zhuawaba.com/post/18
  • Complete development documentation (back end): www.zhuawaba.com/post/19

【VueBlog】A complete tutorial on a front-end/back-end separated blog project based on SpringBoot + Vue

  • Full 200-minute video tutorial: www.bilibili.com/video/BV1af…
  • Complete development documentation: www.zhuawaba.com/post/17

If you have any questions, please come to my official account [Java Q&A Society] and ask me