In a recent meeting, exception handling and log output came up. Yes, even things this small can spark disagreement. Log4j is generally understood, while the rest go largely unused. Our leader also admitted he was a little rusty on large-scale logging stacks like ELK after not touching them for a while. So today, here is an introduction to logging.

Logs are indispensable to programmers. During development, we debug after writing code, and logs are a must: they help us locate problems and therefore fix bugs faster. In this issue, Koha takes a closer look at the four logging tools we use most frequently, to help you develop more efficiently.

Okay, let’s get into the main content.

Slf4j

SLF4J stands for Simple Logging Facade for Java. It only provides a unified logging interface for Java programs; it is not a concrete logging implementation, just a specification, much like JDBC. So SLF4J alone will not work; it must be paired with a concrete logging implementation such as org.apache.log4j.Logger, java.util.logging.Logger, and so on.
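
As an illustration, with Maven you could declare the SLF4J API together with one concrete implementation. This is only a sketch; the logback-classic binding and the version numbers are chosen as examples:

<!-- The facade: only interfaces, no logging implementation -->
<dependency>
    <groupId>org.slf4j</groupId>
    <artifactId>slf4j-api</artifactId>
    <version>1.7.36</version>
</dependency>
<!-- One concrete implementation; logback-classic implements SLF4J natively -->
<dependency>
    <groupId>ch.qos.logback</groupId>
    <artifactId>logback-classic</artifactId>
    <version>1.2.11</version>
</dependency>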

Simple syntax

SLF4J is not as widely used as Log4j, because many developers are familiar with Log4j and either don’t know about SLF4J or don’t care about it and stick with Log4j. Let’s look at an example of Log4j first:

Logger.debug("Hello " + name);

Because of string concatenation, the statement above concatenates the strings first and only then decides, based on the current level, whether the debug log should actually be output. Even if the log is never written, the concatenation is still performed. Many companies are therefore forced to use the following form, so that the concatenation only happens when the level is actually debug:

if (logger.isDebugEnabled()) {
    logger.debug("Hello " + name);
}

It avoids the string concatenation problem, but it’s a bit tedious, isn’t it? SLF4J, by contrast, provides the following simple syntax:

LOGGER.debug("Hello {}", name);

It is similar in form to the first example, without the string concatenation problems, and not as verbose as the second.
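
The placeholder syntax also works with more than one argument, and a Throwable passed as the last argument is logged together with its stack trace. A small sketch (logger is the SLF4J logger; userName, orderId, and processOrder are made-up names for the example):

// Multiple placeholders are filled in order
logger.debug("User {} placed order {}", userName, orderId);

// A Throwable as the last argument is printed with its stack trace
try {
    processOrder(orderId);
} catch (Exception e) {
    logger.error("Failed to process order {}", orderId, e);
}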

Log levels

SLF4J offers four log levels to choose from; from top to bottom in the list below the priority rises, and only messages at or above the configured level are printed (a short example follows the list):

  • Debug:

    Simply put, information that is useful for debugging the program can be output at debug

  • Info:

    Information useful to the user

  • Warn:

    Information about conditions that may lead to errors

  • Error:

    As the name suggests, information about something that actually went wrong
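
As a quick illustration, here is a minimal sketch of the four levels in use; if the effective level were set to info, the debug line would be suppressed (logger is the SLF4J logger, and poolSize, port, missRate, and dbException are variables assumed to exist elsewhere):

logger.debug("Connection pool size is {}", poolSize);        // detail useful while debugging
logger.info("Application started on port {}", port);         // information useful to the user
logger.warn("Cache miss rate is high: {}", missRate);        // a condition that may lead to errors
logger.error("Failed to connect to database", dbException);  // something actually went wrong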

Usage

A Logger instance is created directly through LoggerFactory:

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class Test {
    private static final Logger logger = LoggerFactory.getLogger(Test.class);
    // ...
}

Configuration

Spring Boot supports SLF4J well: SLF4J is already integrated and configured for you, so the only file you need to touch is application.yml (or application.properties). application.yml seems more intuitive, but YAML has strict format requirements; for example, a colon must be followed by a space, otherwise the project will not start and no error will be reported. Whether you use properties or YML comes down to personal habit.

Let’s take a look at the logging configuration in the application.yml file:

logging:
  config: classpath:logback.xml
  level:
    com.bowen.dao: trace

logging.config specifies which log configuration file to read when the project starts; here it is classpath:logback.xml. logging.level specifies the log output level for specific mappers; the configuration above sets the log level of all mappers under the com.bowen.dao package to trace. In a production environment you can set this level to error.

The log levels are ERROR, WARN, INFO, and DEBUG in descending order.
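
If you prefer application.properties over YAML, the same two settings would look roughly like this (an equivalent sketch, not an additional requirement):

logging.config=classpath:logback.xml
logging.level.com.bowen.dao=trace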

Log4j

Log4j is an Apache open source project. With Log4j, we can control the destination of log messages: the console, files, GUI components, even a socket server, the NT event logger, or the UNIX syslog daemon. We can also control the output format of each log, and by defining the level of each message we can control the logging process in finer detail, all without changing the code.

Component architecture

Log4j consists of three important components: Loggers, Appenders (output destinations), and Layouts (log formatters).

Logger:

Controls which logging statements are enabled or disabled, and sets the level of the log information

Appenders:

Specifies whether logs will be printed to the console or to a file

Layout:

Controls the display format of log information

Log4j defines five levels of log information to be output: DEBUG, INFO, WARN, ERROR, and FATAL. Only logs at or above the configured level are output, so you can configure what gets logged in different situations without changing the code.

Log levels

Log4j has the following log levels:

  • Off:

    Disables logging; the highest level, no log output

  • Fatal:

    Catastrophic errors; the highest of the levels that actually log anything

  • Error:

    Errors, generally used for exception information

  • Warn:

    Warnings, generally used for information such as improper references

  • Info:

    General information

  • Debug:

    Debugging information, generally used during program execution

  • Trace:

    Stack information, generally not used

  • All:

    Enables all logging; the lowest level, everything is output

In the Logger core class, each logging level except OFF and ALL corresponds to a set of overloaded methods used to log at that level. A message is logged only when the level of the method called is greater than or equal to the configured log level.
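
A minimal sketch of those level methods on the Log4j 1.x Logger (the class Test and the message strings are only examples):

import org.apache.log4j.Logger;

public class Test {
    private static final Logger logger = Logger.getLogger(Test.class);

    public static void main(String[] args) {
        logger.debug("debug message");   // output only when the configured level is DEBUG or ALL
        logger.info("info message");
        logger.warn("warn message");
        logger.error("error message");
        logger.fatal("fatal message");

        // As with SLF4J, expensive message construction can be guarded
        if (logger.isDebugEnabled()) {
            logger.debug("expensive detail: " + System.currentTimeMillis());
        }
    }
}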

Usage

To use Log4j, you only need to import one JAR package:

<dependency>
    <groupId>log4j</groupId>
    <artifactId>log4j</artifactId>
    <version>1.2.9</version>
</dependency>

Configuration

Create a log4j.properties configuration file under the resources root directory (the location and file name must be exactly this), and add the configuration information to that properties file:

log4j.rootLogger=debug,cons

log4j.appender.cons=org.apache.log4j.ConsoleAppender 
log4j.appender.cons.target=System.out  
log4j.appender.cons.layout=org.apache.log4j.PatternLayout 
log4j.appender.cons.layout.ConversionPattern=%m%n

The properties file is the most common configuration method, and in actual development it is basically what everyone uses. A properties configuration file follows this pattern:

# The rootLogger takes a log level followed by one or more appender names
log4j.rootLogger=LogLevel,AppenderA,AppenderB,...

# ---------------- Define an appender ----------------
# The appender name can be anything, but for the appender to take effect
# it must be added to the rootLogger line above.
# The value is the corresponding log4j appender class.
log4j.appender.AppenderName=org.apache.log4j.ConsoleAppender
log4j.appender.AppenderName.Target=System.out
# Define the appender's layout
log4j.appender.AppenderName.layout=org.apache.log4j.SimpleLayout
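
For example, here is a sketch of a properties file that logs to both the console and a daily rolling file; the appender names cons and file and the path logs/app.log are arbitrary choices for the example:

log4j.rootLogger=info,cons,file

# Console appender
log4j.appender.cons=org.apache.log4j.ConsoleAppender
log4j.appender.cons.target=System.out
log4j.appender.cons.layout=org.apache.log4j.PatternLayout
log4j.appender.cons.layout.ConversionPattern=%d{yyyy-MM-dd HH:mm:ss} [%p] %c - %m%n

# Daily rolling file appender
log4j.appender.file=org.apache.log4j.DailyRollingFileAppender
log4j.appender.file.File=logs/app.log
log4j.appender.file.DatePattern='.'yyyy-MM-dd
log4j.appender.file.layout=org.apache.log4j.PatternLayout
log4j.appender.file.layout.ConversionPattern=%d{yyyy-MM-dd HH:mm:ss} [%p] %c - %m%n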

Logback

Simply put, Logback is a logging framework for the Java world and is considered the successor to Log4j. Logback is an upgrade of Log4j, so it has many advantages over Log4j.

Modules

Logback consists of three modules: logback-core, logback-classic, and logback-access.

Logback-core is the infrastructure on which other modules are built, and obviously logback-core provides some key common mechanisms.

Logback-classic has the same status and functions as Log4J. It is also considered an improved version of Log4J, and it implements simple logging facade SLF4J.

Logback-access is primarily a module that interacts with Servlet containers, such as Tomcat or Jetty, and provides some functionality related to HTTP access.

Logback components

The Logback components are as follows:

  • Logger:

    The recorder of logs; it is associated with the application's context and is mainly used to store log objects. Log type and level can be customized

  • Appender:

    Used to specify the log output destination; destinations can be the console, files, databases, and so on

  • Layout:

    Responsible for converting events to strings and outputting formatted log information. In logback, Layout objects are wrapped inside encoders

Advantages of Logback

Logback has the following advantages:

  • For the same code path, Logback executes faster
  • More extensive testing
  • Implements the SLF4J API natively (Log4j requires an intermediate adaptation layer)
  • Richer documentation
  • Supports XML or Groovy configuration
  • Configuration files are automatically hot-reloaded
  • Graceful recovery from I/O errors
  • Automatic deletion of old log archives
  • Automatic compression of logs into archive files
  • Support for prudent mode, allowing multiple JVM processes to write to the same log file
  • Support for conditionals in configuration files to adapt to different environments
  • More powerful filters
  • Support for SiftingAppender (a selectable appender)
  • Exception stack traces include package information

Tag attributes

A logback configuration file has the following tags and attributes:

  • configuration:

    The root node of the configuration

  • scan:

    When true, the configuration file is re-scanned and reloaded if it changes. The default value is true

  • scanPeriod:

    The interval for checking whether the configuration file has been modified. If no time unit is given, the default unit is milliseconds. The default is 1 minute. This attribute only takes effect when scan="true"

  • debug:

    If true, logback prints its internal status logs so you can check how logback is running in real time. The default value is false

  • contextName:

    The context name, which defaults to "default"; it can be set to another name with this tag to distinguish the records of different applications. Once set it cannot be modified

  • appender:

    A child node of configuration and the component responsible for writing logs; it has name and class attributes

  • name:

    The name of the appender

  • class:

    The fully qualified name of the appender, i.e. the concrete appender class, such as ConsoleAppender or FileAppender

  • append:

    When true, logs are appended to the end of the file; if false, the existing file is cleared first. The default value is true

Configuration

By default, the Logback framework loads a configuration file named logback-spring.xml or logback.xml from the classpath:

<?xml version="1.0" encoding="UTF-8"?>
<configuration>
    <property resource="logback.properties"/>

    <appender name="CONSOLE-LOG" class="ch.qos.logback.core.ConsoleAppender">
        <layout class="ch.qos.logback.classic.PatternLayout">
            <pattern>[%d{yyyy-MM-dd' 'HH:mm:ss.sss}] [%C] [%t] [%L] [%-5p] %m%n</pattern>
        </layout>
    </appender>

    <!-- Logs at INFO level and above, excluding ERROR -->
    <appender name="INFO-LOG" class="ch.qos.logback.core.rolling.RollingFileAppender">
        <filter class="ch.qos.logback.classic.filter.LevelFilter">
            <level>ERROR</level>
            <onMatch>DENY</onMatch>
            <onMismatch>ACCEPT</onMismatch>
        </filter>
        <encoder>
            <pattern>[%d{yyyy-MM-dd' 'HH:mm:ss.sss}] [%C] [%t] [%L] [%-5p] %m%n</pattern>
        </encoder>
        <!-- Rolling policy -->
        <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
            <!-- Output path -->
            <fileNamePattern>${LOG_INFO_HOME}//%d.log</fileNamePattern>
            <maxHistory>30</maxHistory>
        </rollingPolicy>
    </appender>

    <appender name="ERROR-LOG" class="ch.qos.logback.core.rolling.RollingFileAppender">
        <filter class="ch.qos.logback.classic.filter.ThresholdFilter">
            <level>ERROR</level>
        </filter>
        <encoder>
            <pattern>[%d{yyyy-MM-dd' 'HH:mm:ss.sss}] [%C] [%t] [%L] [%-5p] %m%n</pattern>
        </encoder>
        <!-- Rolling policy -->
        <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
            <!-- Output path -->
            <fileNamePattern>${LOG_ERROR_HOME}//%d.log</fileNamePattern>
            <maxHistory>30</maxHistory>
        </rollingPolicy>
    </appender>

    <root level="info">
        <appender-ref ref="CONSOLE-LOG" />
        <appender-ref ref="INFO-LOG" />
        <appender-ref ref="ERROR-LOG" />
    </root>
</configuration>
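
The property resource="logback.properties" line above reads a properties file from the classpath. A minimal sketch of what it could contain, defining the two variables that the fileNamePattern elements reference (the directory values are hypothetical):

LOG_INFO_HOME=/data/logs/info
LOG_ERROR_HOME=/data/logs/error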

ELK

ELK is short for Elasticsearch, Logstash, and Kibana. Together with related components, these three pieces of software can build a powerful real-time log processing system. A later addition to the stack is FileBeat, a lightweight log collection and processing tool (agent). FileBeat consumes few resources and is well suited to collecting logs on servers and shipping them to Logstash; it is also the officially recommended tool for this.

  • Elasticsearch:

    A Lucene-based distributed storage and indexing engine with full-text indexing; it indexes and stores logs so that business users can search and query them

  • Logstash:

    Log collection, filtering, and forwarding middleware; it collects, filters, and forwards all types of logs on the service side to Elasticsearch for further processing

  • Kibana:

    A visualization tool for Elasticsearch; it queries Elasticsearch data and presents it visually, for example as pie charts, histograms, or area charts

  • Filebeat:

    Part of the Beats family of lightweight log collection and processing tools. Beats currently includes four tools: Packetbeat (collects network traffic data), Topbeat (collects system-, process-, and file-system-level CPU and memory usage data), Filebeat (collects file data), and Winlogbeat (collects Windows event log data)

Main features

A complete centralized log system must contain the following main features:

  • Collection:

    Collect log data from multiple sources

  • Transfer:

    Stable transfer of log data to the central system

  • Storage:

    How to store the log data

  • Analysis:

    Support for UI analysis

  • Warning:

    Ability to provide error reporting and monitoring mechanisms

ELK provides a complete set of solutions. All of its components are open source software that work well with each other and connect seamlessly, efficiently covering many scenarios. It is currently the mainstream choice for a logging system.

Application scenarios

In the operation and maintenance of a massive log system, the following aspects are essential:

  • Centralized query and management of distributed log data;
  • System monitoring, including the monitoring of system hardware and application components;
  • Troubleshooting;
  • Security information and event management;
  • Report function;

ELK runs on top of a distributed system. Through collection, filtering, transmission, and storage, it centrally manages massive system and component logs and provides near-real-time search and analysis, with easy-to-use features such as search, monitoring, event messages, and reports. This helps operations staff monitor online services in near real time, locate and troubleshoot service anomalies, track and analyze bugs during development, analyze business trends, and audit security and compliance, digging deep into the big-data value of logs. Elasticsearch also provides a variety of APIs (such as REST, Java, and Python) that users can build on to meet different requirements.

Configuration

To configure Filebeat, open filebeat.yml and set it up as follows:

filebeat.inputs:
- type: log
  enabled: true
  # Input source file paths
  paths:
    - /data/logs/tomcat/*.log
  # Merge multi-line entries such as stack traces
  multiline:
    pattern: '\s*\['
    negate: true
    match: after
  tags: ["tomcat"]

# Output target; this can be changed to Elasticsearch instead
output.logstash:
  hosts: ["172.29.12.35:5044"]

Next, configure Logstash by creating (or opening) a .conf file under the config folder; multiple configuration files can be started at the same time. Logstash also has a powerful filter capability that can filter the raw data. For example:

# Input sources (required)
input {
  # Console input
  stdin {}
  # Read from a file; type works like a name/tag for the input
  file {
    type => "info"
    # File path
    path => ['/usr/local/logstash-7.9.1/config/data-my.log']
    start_position => "beginning"
  }
  file {
    type => "error"
    path => ['/usr/local/logstash-7.9.1/config/data-my2.log']
    start_position => "beginning"
    codec => multiline {
      pattern => "\s*\["
      negate => true
      what => "previous"
    }
  }
  # Input from Beats (e.g. Filebeat)
  beats {
    port => 5044
  }
}

# Output targets
output {
  if [type] == "error" {
    elasticsearch {
      hosts => "172.29.12.35:9200"
      # The index name to select in Kibana
      index => "log-error-%{+YYYY.MM.dd}"
    }
  }
  if [type] == "info" {
    elasticsearch {
      hosts => "172.29.12.35:9200"
      index => "log-info-%{+YYYY.MM.dd}"
    }
  }
  # Logs tagged "tomcat" (from Filebeat)
  if "tomcat" in [tags] {
    stdout {
      codec => rubydebug {}
    }
  }
}
