Hello, I’m Koha.

Happy New Year, everyone! During a recent meeting we got to talking about exception handling and log output. These seemingly trivial topics turned out to be where opinions diverge, mostly because Log4j is widely understood while the other frameworks go largely unused. Our leader also admitted that large-scale log systems such as ELK had grown a little rusty from long disuse. So today I'm putting together an introduction to logging.

Logging is indispensable for programmers. During development, once the code is written we need to debug it, and logs are essential for that: they help us locate problems and fix bugs faster. In this issue, Koha takes a detailed look at the four kinds of logging tools we use most often, to help you improve your development efficiency.

Okay, let’s get into the main text.

Slf4j

SLF4J stands for Simple Logging Facade for Java. It is simply a unified interface for log output in Java programs; it is not a concrete logging implementation, just a specification, much like JDBC. So SLF4J alone will not work; it must be combined with a concrete logging implementation, such as Apache's org.apache.log4j.Logger or the JDK's java.util.logging.Logger.

Simple syntax

SLF4J is less commonly used than Log4J because many developers are familiar with Log4J but not SLF4J, or do not care about SLF4J and stick with Log4J. Let’s take a look at the Log4J example:

Logger.debug("Hello " + name);

Because of string concatenation, the statement above builds the string first and only then checks whether the current level allows debug output. Even if the log is never printed, the concatenation is still performed, so many companies require the following guard, which makes the concatenation happen only when the debug level is actually enabled:

if (logger.isDebugEnabled()) {
    logger.debug("Hello " + name);
}

It avoids string concatenation, but it’s a bit tedious, isn’t it? In contrast, SLF4J provides the following simple syntax:

LOGGER.debug("Hello {}", name);

It is similar in form to the first example, but without the string concatenation problems, and is not as cumbersome as the second example.

Log levels

SLF4J has four commonly used log levels. Listed from top to bottom below, the priority increases; only messages at or above the configured level are printed.

  • Debug: simply put, information that helps with debugging the program can be output at Debug

  • Info: information that is useful to the user

  • Warn: information about something that may lead to an error

  • Error: as the name suggests, used where an error has occurred

Usage

Since SLF4J is a unified specification, the logger is created directly through LoggerFactory:

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class Test {

    private static final Logger logger = LoggerFactory.getLogger(Test.class);

    // ...
}
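
With the logger declared, calls follow the parameterized style shown earlier. Below is a minimal usage sketch (the class, method, and messages are purely illustrative) covering the four levels:

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class OrderService {

    private static final Logger logger = LoggerFactory.getLogger(OrderService.class);

    public void placeOrder(String orderId) {
        // the placeholder is only filled in when the level is enabled, so nothing is wasted
        logger.debug("Received order {}", orderId);

        logger.info("Order {} accepted", orderId);

        if (orderId == null || orderId.isEmpty()) {
            logger.warn("Order id '{}' looks invalid", orderId);
        }

        try {
            // ... business logic that might fail ...
        } catch (RuntimeException e) {
            // a Throwable passed as the last argument is printed together with its stack trace
            logger.error("Order {} failed", orderId, e);
        }
    }
}

Passing the exception as the final argument, rather than concatenating it into the message, is what keeps the full stack trace in the output.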

Configuration mode

Spring Boot has good support for SLF4J: it is integrated internally, so usually you only need to configure it. In Spring Boot, application.yml (or application.properties) is the only file that needs configuring; a properties file is what the project starts with when it is created. YAML looks more intuitive, but it is strict about format: an English colon must be followed by a space, for example, otherwise the project will probably fail to start without reporting an error. Properties or YAML is fine, depending on your personal habits.

Let’s look at the log configuration in the application.yml file:

logging:
  config: classpath:logback.xml
  level:
    com.bowen.dao: trace

logging.config specifies which log configuration file to read when the project starts; here it points to classpath:logback.xml. logging.level sets the output level of the mapper logs: the configuration above means that all mapper logs under com.bowen.dao are at trace level, which prints the SQL used to operate the database. Set this level to trace to locate problems during development, and to error in production.

Common log levels are ERROR, WARN, INFO, and DEBUG in descending order.

Log4j

Log4j is an open source project of Apache. Using Log4j, we can direct log output to the console, files, GUI components, even socket servers, the NT event log, or the UNIX syslog daemon. We can also control the output format of each log entry, and by defining the level of each log message we can control the log generation process in fine detail.

Component architecture

Log4j consists of three important components: Loggers, Appenders, and Layout.

Logger: Controls which logging statements are enabled or disabled and limits the level of log information

Appenders: specifies whether logs will be printed to the console or to a file

Layout: controls the display format of log information

Log4j defines five levels of log output: DEBUG, INFO, WARN, ERROR, and FATAL. When logging, only messages whose level is at or above the configured level are output, which makes it easy to control what gets printed in different situations without changing the code.

Log levels

Log4j log levels are as follows:

  • Off: logging is turned off; nothing is output

  • Fatal: a catastrophic error; the highest level that can still output a log

  • Error: an error, usually used for exception information

  • Warn: a warning, generally used for information about irregular usage

  • Info: ordinary informational messages

  • Debug: debugging information, usually used while the program is running

  • Trace: trace information, the finest-grained level; rarely used

  • All: the lowest level; all logging is enabled

In the Logger core class, every log level except OFF and ALL corresponds to a set of overloaded methods used to log at that level. A message is recorded only when the level of the method called is greater than or equal to the configured log level.
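
For instance, a minimal sketch of those overloaded methods (the class name and messages are only illustrative; note this is Log4j's own API rather than the SLF4J facade):

import org.apache.log4j.Logger;

public class BatchJob {

    private static final Logger logger = Logger.getLogger(BatchJob.class);

    public static void main(String[] args) {
        logger.debug("connection pool initialised");   // printed only when the configured level is DEBUG or lower
        logger.info("batch job started");
        logger.warn("falling back to the default configuration");
        try {
            throw new IllegalStateException("boom");
        } catch (IllegalStateException e) {
            logger.error("batch job failed", e);       // overload that also records the Throwable
            logger.fatal("shutting down", e);
        }
    }
}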

Usage

To use Log4j, you only need to add a single dependency:

<dependency>
    <groupId>log4j</groupId>
    <artifactId>log4j</artifactId>
    <version>1.2.9</version>
</dependency>

Configuration mode

Create a log4j.properties configuration file under the resources root directory (the location and file name must be exactly this), and add the configuration items to the properties file:

log4j.rootLogger=debug,cons

log4j.appender.cons=org.apache.log4j.ConsoleAppender 
log4j.appender.cons.target=System.out  
log4j.appender.cons.layout=org.apache.log4j.PatternLayout 
log4j.appender.cons.layout.ConversionPattern=%m%n

The properties file is the most common configuration; in actual development it is basically what you will use. A properties configuration file can be written following this template:

# AppenderA, AppenderB, ... are the names of the appenders defined below
log4j.rootLogger=log level,AppenderA,AppenderB,...

# ---------------- define an appender ----------------
# Appender names can be arbitrary; for an appender to take effect it must be
# added to the rootLogger line above. The value is the corresponding log4j appender class.
log4j.appender.AppenderName=org.apache.log4j.ConsoleAppender
log4j.appender.AppenderName.Target=System.out
# define the appender's layout
log4j.appender.AppenderName.layout=org.apache.log4j.SimpleLayout

Logback

Simply put, Logback is a logging framework for the Java world and is considered the successor to Log4j. Logback is an upgrade of Log4j, so it naturally has many advantages that Log4j lacks.

Modules

Logback consists of three modules: logback-core, logback-classic, and logback-access.

logback-core is the infrastructure on which the other two modules are built; it provides the key common mechanisms.

logback-classic can be regarded as an improved version of Log4j, and it natively implements the SLF4J logging facade.

logback-access is the module that integrates with Servlet containers such as Tomcat and Jetty, providing functionality related to HTTP access logging.


Logback components

The Logback components are as follows:

  • Logger: the log recorder; it is attached to the corresponding context of the application, is mainly used to store log objects, and lets you define log type and level

  • Appender: specifies the destination of log output; destinations can be the console, files, databases, and so on

  • Layout: converts a logging event into a string and formats the log output; in Logback the Layout object is wrapped inside an encoder
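
These three components are normally wired together in logback.xml (shown in the configuration section below), but a short programmatic sketch makes their roles concrete. It assumes logback-classic is on the classpath, and the pattern string is just an example:

import ch.qos.logback.classic.Level;
import ch.qos.logback.classic.LoggerContext;
import ch.qos.logback.classic.encoder.PatternLayoutEncoder;
import ch.qos.logback.classic.spi.ILoggingEvent;
import ch.qos.logback.core.ConsoleAppender;
import org.slf4j.LoggerFactory;

public class LogbackWiring {

    public static void main(String[] args) {
        // the LoggerContext holds all of the application's Logger objects
        LoggerContext context = (LoggerContext) LoggerFactory.getILoggerFactory();

        // Layout/encoder: turns a logging event into a formatted string
        PatternLayoutEncoder encoder = new PatternLayoutEncoder();
        encoder.setContext(context);
        encoder.setPattern("%d{HH:mm:ss.SSS} [%thread] %-5level %logger - %msg%n");
        encoder.start();

        // Appender: decides where the formatted output goes (here, the console)
        ConsoleAppender<ILoggingEvent> appender = new ConsoleAppender<>();
        appender.setContext(context);
        appender.setEncoder(encoder);
        appender.start();

        // Logger: records events; attach the appender and choose the level
        ch.qos.logback.classic.Logger root =
                context.getLogger(org.slf4j.Logger.ROOT_LOGGER_NAME);
        root.detachAndStopAllAppenders(); // drop whatever the default configuration attached
        root.addAppender(appender);
        root.setLevel(Level.INFO);

        root.info("Logback components wired together programmatically");
    }
}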

Advantages of Logback

Logback has the following advantages:

  • Logback executes faster on the same code paths

  • More thorough testing

  • Implements the SLF4J API natively (Log4j needs an intermediate adapter layer)

  • Richer documentation

  • Supports configuration in XML or Groovy

  • Configuration files are automatically reloaded when they change

  • Graceful recovery from I/O errors

  • Automatic deletion of old log archives

  • Automatic compression of logs into archive files

  • Supports prudent mode, allowing multiple JVM processes to write to the same log file

  • Supports conditional processing in configuration files to adapt to different environments

  • More powerful filters

  • Supports SiftingAppender (a sifting appender)

  • Exception stack traces carry package information

Tag attributes

The Logback attributes are as follows:

  • configuration: the root node of the configuration file

  • Scan: If the value is true, the configuration file will be scanned and reloaded if the configuration file properties change. The default value is true

  • ScanPeriod: indicates the interval for monitoring whether the configuration file is modified. If the unit of time is not specified, the default unit is millisecond. The default time is 1 minute. Valid when scan=”true”

  • Debug: If the value is true, the system displays internal logback logs to check the running status of the logback in real time. The default value is false

  • ContextName: contextName (default: “default”). Use this tag to set it to another name to distinguish between records of different applications. Once set, it cannot be modified

  • Appender: a child node of a configuration module that is responsible for writing logs. It has name and class attributes

  • name: the name of the appender

  • Class: The fully qualified name of an appender, which is the name of a specific appender class, such as ConsoleAppender or FileAppender

  • append: when true, logs are appended to the end of the file; when false, the existing file is truncated first. The default value is true

Configuration mode

By default, the Logback framework loads a configuration file named logback-spring.xml or logback.xml from the classpath:

<?xml version="1.0" encoding="UTF-8"?>
<configuration>
    <property resource="logback.properties"/>

    <appender name="CONSOLE-LOG" class="ch.qos.logback.core.ConsoleAppender">
        <layout class="ch.qos.logback.classic.PatternLayout">
            <pattern>[%d{yyyy-MM-dd' 'HH:mm:ss.sss}] [%C] [%t] [%L] [%-5p] %m%n</pattern>
        </layout>
    </appender>

    <!-- capture logs at info level and above, but excluding error -->
    <appender name="INFO-LOG" class="ch.qos.logback.core.rolling.RollingFileAppender">
        <filter class="ch.qos.logback.classic.filter.LevelFilter">
            <level>ERROR</level>
            <onMatch>DENY</onMatch>
            <onMismatch>ACCEPT</onMismatch>
        </filter>
        <encoder>
            <pattern>[%d{yyyy-MM-dd' 'HH:mm:ss.sss}] [%C] [%t] [%L] [%-5p] %m%n</pattern>
        </encoder>
        <!-- rolling policy -->
        <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
            <fileNamePattern>${LOG_INFO_HOME}//%d.log</fileNamePattern>
            <maxHistory>30</maxHistory>
        </rollingPolicy>
    </appender>

    <appender name="ERROR-LOG" class="ch.qos.logback.core.rolling.RollingFileAppender">
        <filter class="ch.qos.logback.classic.filter.ThresholdFilter">
            <level>ERROR</level>
        </filter>
        <encoder>
            <pattern>[%d{yyyy-MM-dd' 'HH:mm:ss.sss}] [%C] [%t] [%L] [%-5p] %m%n</pattern>
        </encoder>
        <!-- rolling policy -->
        <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
            <fileNamePattern>${LOG_ERROR_HOME}//%d.log</fileNamePattern>
            <maxHistory>30</maxHistory>
        </rollingPolicy>
    </appender>

    <root level="info">
        <appender-ref ref="CONSOLE-LOG" />
        <appender-ref ref="INFO-LOG" />
        <appender-ref ref="ERROR-LOG" />
    </root>
</configuration>

ELK

ELK is short for Elasticsearch, Logstash, and Kibana. These three applications, together with their related components, can be used to build a large-scale, real-time log processing system. A more recent addition is FileBeat, a lightweight log collection agent: it uses few resources and is well suited to gathering logs on each server and forwarding them to Logstash, which is also what the official documentation recommends.

  • Elasticsearch is a distributed storage and indexing engine based on Lucene that supports full-text indexing. It is mainly used to index logs and store them for business retrieval.

  • Logstash: a middleware used to collect, filter, and forward logs. It collects, filters, and forwards logs of various business lines to Elasticsearch for further processing.

  • Kibana is a visualization tool that queries Elasticsearch data and presents it to the business in visual forms such as pie charts, histograms, and area charts.

  • Filebeat: Beats is a lightweight log collection and processing tool. Currently Beats contains four tools: Packetbeat (collects network traffic data), Topbeat (collects data such as CPU and memory usage at the system, process and file system levels), Filebeat (collects file data), and Winlogbeat (collects Windows event log data)


The main features

A complete centralized logging system should have the following main features:

  • Collection: Logs from multiple sources can be collected

  • Transfer: Can transfer log data to the central system in a stable manner

  • Storage: can store the log data

  • Analysis: Can support UI analysis

  • Warning: Can provide error reporting, monitoring mechanism

ELK provides a complete set of solutions; all of its components are open source, work well together, and connect seamlessly, efficiently meeting the needs of many applications. It is currently one of the mainstream choices for log systems.

Application scenarios

In the operation and maintenance of a massive log system, the following aspects are essential:

  • Centralized query and management of distributed log data;

  • System monitoring, including the monitoring of system hardware and application components;

  • Troubleshooting;

  • Safety information and incident management;

  • Report function;

ELK runs on distributed systems. Through collection, filtering, transmission, and storage, it centrally manages massive system and component logs and offers near-real-time search and analysis, along with easy-to-use features such as search, monitoring, event messages, and reports. This helps operations staff monitor online business in near real time, locate the cause of anomalies promptly, troubleshoot faults, track and analyze bugs during development, analyze business trends, audit security and compliance, and mine the big-data value buried in logs. At the same time, Elasticsearch provides a variety of APIs (REST, Java, Python, and so on) that can be used for extension and development to meet different requirements.
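
As an illustration of the Java API just mentioned, here is a minimal sketch using the Elasticsearch 7.x high-level REST client to search one of the indexes written by the Logstash configuration further below. The host follows the examples in this article, while the log-error-* index pattern and the message field are assumptions about how the logs were indexed:

import org.apache.http.HttpHost;
import org.elasticsearch.action.search.SearchRequest;
import org.elasticsearch.action.search.SearchResponse;
import org.elasticsearch.client.RequestOptions;
import org.elasticsearch.client.RestClient;
import org.elasticsearch.client.RestHighLevelClient;
import org.elasticsearch.index.query.QueryBuilders;
import org.elasticsearch.search.builder.SearchSourceBuilder;

public class LogSearch {

    public static void main(String[] args) throws Exception {
        // connect to the Elasticsearch node used elsewhere in this article
        RestHighLevelClient client = new RestHighLevelClient(
                RestClient.builder(new HttpHost("172.29.12.35", 9200, "http")));

        // search the daily error indexes for a keyword in the log message
        SearchRequest request = new SearchRequest("log-error-*");
        request.source(new SearchSourceBuilder()
                .query(QueryBuilders.matchQuery("message", "NullPointerException")));

        SearchResponse response = client.search(request, RequestOptions.DEFAULT);
        System.out.println("matching log entries: " + response.getHits().getTotalHits().value);

        client.close();
    }
}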

Configuration mode

Open filebeat.yml and configure it as follows:

filebeat.inputs:
  - type: log
    enabled: true
    multiline:
      pattern: '\s*\['
      negate: true
      match: after

output.logstash:
  hosts: ["172.29.12.35:5044"]

For the Logstash configuration, create a new .conf file or open the .conf file in the config folder. Multiple configuration files can be started at the same time, and Logstash has a powerful filter capability for processing raw data. The configuration is as follows:

input {
  # read from the console
  stdin {}
  # read a log file; type is just a label we assign
  file {
    type => "info"
    path => ['/usr/local/logstash-7.9.1/config/data-my.log']
    start_position => "beginning"
  }
  file {
    type => "error"
    path => ['/usr/local/logstash-7.9.1/config/data-my2.log']
    start_position => "beginning"
    # merge multi-line stack traces into a single event
    codec => multiline {
      pattern => "\s*\["
      negate => true
      what => "previous"
    }
  }
}

output {
  if [type] == "error" {
    elasticsearch {
      hosts => "172.29.12.35:9200"
      index => "log-error-%{+YYYY.MM.dd}"
    }
  }
  if [type] == "info" {
    elasticsearch {
      hosts => "172.29.12.35:9200"
      index => "log-info-%{+YYYY.MM.dd}"
    }
  }
  # events shipped by Filebeat
  if "tomcat" in [tags] {
    elasticsearch {
      hosts => "172.29.12.35:9200"
    }
  }
  # print events to the console for debugging
  stdout {
    codec => rubydebug
  }
}