The logging module is a standard module built into Python. It is mainly used to output run-time logs. You can set the log output level, the log saving path, and log file rotation.

1. Logger object creation

The logging module of Python provides a standard logging interface through which you can store logs in various formats. It is mainly used for run-time logging, and you can set the log output level, the log saving path, and log file rotation.

Logging consists of four parts:

  • Loggers: the interface that an application calls directly; the application writes logs by calling the API a logger provides
  • Handlers: decide how to dispatch each log record to its correct destination
  • Filters: filter log records, providing fine-grained control over which logs are output
  • Formatters: set the format layout of the final log record

1.1 Loggers

logging.Logger objects play three roles:

  • They expose several methods to the application so that it can write logs at run time
  • Logger objects decide how to handle log messages based on their severity or on filter objects (filtering by severity is the default)
  • Loggers are responsible for passing log records on to the relevant handlers

Logger objects have parent-child relationships; a logger with no explicit parent has root as its parent, and the parent-child relationship is established through the logger name. For example, logging.getLogger("abc.xyz") creates two Logger objects, a parent named "abc" and a child named "abc.xyz"; "abc" has no parent of its own, so its parent is root. In this case "abc" is only a placeholder (virtual) logger object and may have no handler to process logs, whereas root is not a placeholder. When a logger emits a record, its parent objects receive the record as well. This is why some users see every message logged twice after creating a Logger: the message is handled once by the logger's own handler and once more by the root logger.
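A minimal sketch of this propagation behavior (the logger name "abc.xyz" follows the example above; everything else is illustrative):

import logging

logging.basicConfig(level=logging.DEBUG)      # gives the root logger a console handler

child = logging.getLogger("abc.xyz")          # also creates the placeholder logger "abc"
child.setLevel(logging.INFO)
child.addHandler(logging.StreamHandler())     # the child now has a handler of its own

child.info("hello")                           # appears twice: once via the child's handler, once via root's
child.propagate = False                       # stop passing records up to the parent loggers
child.info("hello again")                     # appears once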

1.2 Handlers

Handlers distribute the messages sent by the Logger to the correct destination, for example to the console, to a file, to both, or somewhere else entirely (a process pipe, and so on). They determine the behavior of each log output and are the main area to configure later.

Each Handler also has its own log level. A Logger can have multiple handlers, which means a logger can pass logs to different handlers based on the log level. You can also attach multiple handlers at the same level, so you have the flexibility to set things up according to your needs.
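For instance, a minimal sketch (file name and levels chosen arbitrarily) in which everything from DEBUG up goes to a file while only WARNING and above reaches the console:

import logging

logger = logging.getLogger("demo")
logger.setLevel(logging.DEBUG)               # the logger itself lets everything through

file_handler = logging.FileHandler("demo.log")
file_handler.setLevel(logging.DEBUG)         # the file receives all records
console_handler = logging.StreamHandler()
console_handler.setLevel(logging.WARNING)    # the console only shows warnings and above

logger.addHandler(file_handler)
logger.addHandler(console_handler)

logger.debug("written to the file only")
logger.warning("written to the file and shown on the console")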

  • Types of Handlers (name, location in the library, and role):
StreamHandler (logging.StreamHandler): outputs logs to a stream, which can be sys.stderr, sys.stdout, or a file
FileHandler (logging.FileHandler): outputs logs to a file
BaseRotatingHandler (logging.handlers.BaseRotatingHandler): base class for handlers that rotate log files
RotatingFileHandler (logging.handlers.RotatingFileHandler): rotation by size; supports a maximum log file size and a maximum number of backup files
TimedRotatingFileHandler (logging.handlers.TimedRotatingFileHandler): rotation by time; log files are rolled over at a specified time interval
SocketHandler (logging.handlers.SocketHandler): sends logs to a remote TCP/IP socket
DatagramHandler (logging.handlers.DatagramHandler): sends logs to a remote UDP socket
SMTPHandler (logging.handlers.SMTPHandler): sends logs to an email address
SysLogHandler (logging.handlers.SysLogHandler): outputs logs to syslog
NTEventLogHandler (logging.handlers.NTEventLogHandler): sends logs to the Windows NT/2000/XP event log
MemoryHandler (logging.handlers.MemoryHandler): outputs logs to a specified buffer in memory
HTTPHandler (logging.handlers.HTTPHandler): sends logs to an HTTP server via "GET" or "POST"
  • Log rotation (RotatingFileHandler)

If you log with FileHandler, the file size grows over time and will eventually take up all your disk space. To avoid this, you can use RotatingFileHandler instead of FileHandler in a production environment.

import logging
from logging.handlers import RotatingFileHandler

logger = logging.getLogger(__name__)
logger.setLevel(level=logging.INFO)
# Define a RotatingFileHandler that keeps at most 3 backup files, each at most 1 KB
rHandler = RotatingFileHandler("log.txt", maxBytes=1*1024, backupCount=3)
rHandler.setLevel(logging.INFO)
formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
rHandler.setFormatter(formatter)

console = logging.StreamHandler()
console.setLevel(logging.INFO)
console.setFormatter(formatter)

logger.addHandler(rHandler)
logger.addHandler(console)

logger.info("Start print log")
logger.debug("Do something")
logger.warning("Something maybe fail.")
logger.info("Finish")

1.3 Filters

Filters provide finer-grained control over whether a log record is output. In principle, once a handler receives a record it processes it based on level alone, but if the handler has a Filter attached, it can apply additional processing and checks to the record. For example, a Filter can intercept or discard logs from a particular source, or even modify a record's log level (the level is re-evaluated after modification).

Both loggers and handlers can have filters installed, and even multiple filters in series.
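As an illustration, a minimal sketch of a custom filter (the logger names are hypothetical) that drops records coming from a specific source:

import logging

class DropNoisySource(logging.Filter):
    """Discard records produced by loggers whose name starts with 'noisy'."""
    def filter(self, record):
        return not record.name.startswith("noisy")

handler = logging.StreamHandler()
handler.addFilter(DropNoisySource())          # filters can be attached to handlers
logging.getLogger().addHandler(handler)       # logger objects support addFilter() in the same way

logging.getLogger("app").warning("this record passes the filter and is printed")
logging.getLogger("noisy.lib").warning("this record is dropped by the handler's filter")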

1.4 Formatters

Formatters specify the format layout of the final printed record. A Formatter concatenates the record's data into a concrete string; by default the format simply prints %(message)s. A number of built-in LogRecord attributes can be used in the format string, as shown in the following table:

Attribute name, format placeholder, and meaning:
name (%(name)s): name of the logger
asctime (%(asctime)s): human-readable time; the default format is '2003-07-08 16:49:45,896', where the part after the comma is milliseconds
filename (%(filename)s): file name, the last portion of pathname
pathname (%(pathname)s): full path name of the source file
funcName (%(funcName)s): name of the function containing the logging call
levelname (%(levelname)s): log level name
levelno (%(levelno)s): numeric log level
lineno (%(lineno)d): line number in the source code where the logging call was made
module (%(module)s): module name
msecs (%(msecs)d): millisecond portion of the timestamp
process (%(process)d): process ID
processName (%(processName)s): process name
thread (%(thread)d): thread ID
threadName (%(threadName)s): thread name
relativeCreated (%(relativeCreated)d): time in milliseconds when the record was created, relative to when the logging module was loaded

A Handler can have only one Formatter, so if you want output in multiple formats you have to use multiple handlers.
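A minimal sketch of that idea, with two handlers carrying two different Formatters (the format strings are only examples):

import logging

logger = logging.getLogger("fmt_demo")
logger.setLevel(logging.INFO)

verbose = logging.Formatter("%(asctime)s - %(name)s - %(levelname)s - %(message)s")
brief = logging.Formatter("%(levelname)s: %(message)s")

file_handler = logging.FileHandler("fmt_demo.log")
file_handler.setFormatter(verbose)            # the detailed layout goes to the file
console_handler = logging.StreamHandler()
console_handler.setFormatter(brief)           # the short layout goes to the console

logger.addHandler(file_handler)
logger.addHandler(console_handler)
logger.info("one record, two layouts")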

2 Log Level

You can set a log level for a Logger instance; log messages below this level are ignored. You can also set a log level for a Handler.

  • FATAL: fatal error (in Python's logging module this is an alias of CRITICAL)
  • CRITICAL: a particularly bad event, such as running out of memory or disk space; rarely used
  • ERROR: an error occurred, such as an I/O operation failure or a connection failure
  • WARNING: an important event occurred, but it is not an error, such as an incorrect user login password
  • INFO: routine events, such as handling a request or a state change
  • DEBUG: debug-level information used during debugging, such as the intermediate state of each loop iteration in an algorithm
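A short sketch of how level filtering behaves (the chosen level is arbitrary):

import logging

logging.basicConfig(level=logging.WARNING)    # anything below WARNING is ignored

logging.debug("not shown")
logging.info("not shown either")
logging.warning("this one is printed")
logging.error("and so is this")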

3 Common Functions

3.1 logging.basicConfig(**kwargs)

Configures basic information for the logging module. kwargs supports the following keyword arguments:

  • filename: specifies the log file name.
  • filemode: used together with filename; specifies the mode in which the log file is opened, 'w' or 'a';
  • format: specifies the format and content of the output; the format string can include a lot of useful information (see the LogRecord attributes above).
  • datefmt: specifies the time format, same as time.strftime();
  • level: sets the log level; the default is logging.WARNING.
  • stream: specifies the output stream, which can be sys.stderr, sys.stdout, or a file; the default is sys.stderr. If both stream and filename are specified, stream is ignored.
import logging

logging.basicConfig(level=logging.DEBUG,
                    format='%(asctime)s - %(name)s - %(message)s')
logging.debug('this is debug message')
logging.info('this is info message')
logging.warning('this is warning message')

Result:

2017-08-23 14:22:25,713 - root - this is debug message
2017-08-23 14:22:25,713 - root - this is info message
2017-08-23 14:22:25,714 - root - this is warning message
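A further sketch that combines the other keyword arguments listed above (the file name and formats are just examples):

import logging

logging.basicConfig(filename="app.log",                        # write to a file instead of the console
                    filemode="w",                               # overwrite the file on every run
                    level=logging.INFO,
                    format="%(asctime)s %(levelname)s %(message)s",
                    datefmt="%Y-%m-%d %H:%M:%S")                # same syntax as time.strftime()
logging.info("this line goes to app.log")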

3.2 logging.getLogger([name])

Creates (or returns) a Logger object. Logging is primarily done through Logger objects. You provide the name of the Logger when calling getLogger. (Note that multiple calls to getLogger with the same name return a reference to the same object.) Logger instances have hierarchical relationships, which are represented by the Logger names, for example:

p = logging.getLogger("root")
c1 = logging.getLogger("root.c1")
c2 = logging.getLogger("root.c2")

In this example, p is the parent logger, and c1 and c2 are child loggers of p; c1 and c2 inherit the settings of p. If the name argument is omitted, getLogger returns the root logger of the logger hierarchy.
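A short sketch of the two points above, assuming nothing beyond the standard library:

import logging

# the same name always returns the same Logger object
assert logging.getLogger("root.c1") is logging.getLogger("root.c1")

# omitting the name returns the root logger of the hierarchy
print(logging.getLogger().name)    # prints "root"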

3.3 logger.setLevel(lvl)

Set the log level. Log messages lower than this level are ignored.

3.4 logging.getLevelName(lvl)

Gets the name of the given log level.
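For example, getLevelName accepts a numeric level and returns its name:

import logging

print(logging.getLevelName(logging.WARNING))   # 'WARNING'
print(logging.getLevelName(15))                # 'Level 15' for a value with no registered name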

4. Use of logging

4.1 Basic Usage Configuration

import logging
from logging.handlers import TimedRotatingFileHandler
import os
from com_func.confread import config
from com_func.getpath import testpath

# Create a logger instance
logger = logging.getLogger(__name__)
# Set the logger's log level
logger.setLevel(config.get("LOGGING", "Level"))
# Set the log output format
mat = "%(asctime)s - [%(filename)s -> line:%(lineno)d] - %(levelname)s: %(message)s"
logger_mat = logging.Formatter(mat)
# Output logs to a file, rotated on a time basis
handler_file = TimedRotatingFileHandler(os.path.join(testpath.LOG_DIR_PATH, config.get("FILE_NAME", "log")),
                                        when="m", backupCount=3, encoding="utf-8")
handler_file.setLevel(config.get("LOGGING", "File_level"))
handler_file.setFormatter(logger_mat)
# Output logs to the console
handler_sh = logging.StreamHandler()
handler_sh.setLevel(config.get("LOGGING", "Stream_level"))
handler_sh.setFormatter(logger_mat)
# Attach the handlers to the logger
logger.addHandler(handler_file)
logger.addHandler(handler_sh)
# Return logger

4.2 Capturing the traceback

Python's traceback module is used to track exception information, and we can record the traceback in the logger.

import logging

logger = logging.getLogger(__name__)
logger.setLevel(level=logging.INFO)
handler = logging.FileHandler("log.txt")
handler.setLevel(logging.INFO)
formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
handler.setFormatter(formatter)

console = logging.StreamHandler()
console.setLevel(logging.INFO)

logger.addHandler(handler)
logger.addHandler(console)

logger.info("Start print log")
logger.debug("Do something")
logger.warning("Something maybe fail.")
try:
    open("sklearn.txt", "rb")
except (SystemExit, KeyboardInterrupt):
    raise
except Exception:
    logger.error("Failed to open sklearn.txt from logger.error", exc_info=True)
logger.info("Finish")

You can also use logger.exception(msg, *args), which is equivalent to logger.error(msg, exc_info=True, *args). That is, replace
logger.error("Failed to open sklearn.txt from logger.error", exc_info=True)
with
logger.exception("Failed to open sklearn.txt from logger.exception")

4.3 Using logging in multiple modules

First define and configure the logger 'mainModule' in the main module; the object returned by getLogger('mainModule') anywhere else in the same interpreter process is the same object and can be used directly without being reconfigured. Any logger whose name starts with 'mainModule' is a child of it, for example 'mainModule.sub'. In practice you can define a root logger for your application, such as 'PythonAPP', load its logging configuration in the main function via fileConfig, and then log through children of that root logger, such as 'PythonAPP.Core' or 'PythonAPP.Web', in the other modules of the application, without having to repeatedly define and configure a logger for each module.

import logging
 
module_logger = logging.getLogger("mainModule.sub")
class SubModuleClass(object):
    def __init__(self):
        self.logger = logging.getLogger("mainModule.sub.module")
        self.logger.info("creating an instance in SubModuleClass")
    def doSomething(self):
        self.logger.info("do something in SubModule")
        a = []
        a.append(1)
        self.logger.debug("list a = " + str(a))
        self.logger.info("finish something in SubModuleClass")
 
def some_function():
    module_logger.info("call function some_function")
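For completeness, a minimal sketch of what the main-module side might look like: it configures the parent logger 'mainModule' once (the handler, file name, and format are only examples), after which the sub-module above can log without any further setup:

import logging

# main module: configure the parent logger a single time
logger = logging.getLogger("mainModule")
logger.setLevel(logging.INFO)

handler = logging.FileHandler("main.log")
handler.setFormatter(logging.Formatter("%(asctime)s - %(name)s - %(levelname)s - %(message)s"))
logger.addHandler(handler)

logger.info("creating an instance of SubModuleClass")
# records from "mainModule.sub" and "mainModule.sub.module" propagate up
# to this logger and reuse its handler automatically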

4.4 The problem of duplicate logs with Python logging

The second time you call getLogger(name) with the same name, you get the same logger object, which still carries the handler you added the first time; if you add another handler on the second call, the logger now contains two identical handlers. Each further call adds yet another one, so every message is emitted once per handler and the log output is duplicated over and over.

Solutions:

  • Use a different name each time you create a logger, so every logger is a new one and adding a handler to it causes no duplication.
  • After each logging call, use removeHandler() to remove the handlers from the logger.
  • Before adding a handler in your logging function, check whether the logger already has handlers; if it does, do not add another (see the sketch after this list).
  • Similar to the second approach, pop the handler off the logger's handlers list after logging.
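A minimal sketch of the third approach (the function and logger names are hypothetical), which only configures the logger when it has no handlers yet:

import logging

def get_logger(name="my_logger"):
    logger = logging.getLogger(name)
    if not logger.handlers:                   # configure the logger only on the first call
        logger.setLevel(logging.INFO)
        handler = logging.StreamHandler()
        handler.setFormatter(logging.Formatter("%(asctime)s - %(levelname)s - %(message)s"))
        logger.addHandler(handler)
    return logger

get_logger().info("logged once")
get_logger().info("still logged once, even though getLogger was called again")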