Python’s logging module provides a flexible logging system. The whole module is only a handful of classes in fewer than 5,000 lines of code, and reading it can deepen your understanding of how program logging works. This article is divided into the following parts:

  • Logging overview
  • Logging API design
  • Logger object Logger
  • Log object LogRecord
  • Processor object Handler
  • Formatter object Formatter
  • Rotating log file handlers
  • Summary
  • Tips

Logging overview

The code in this article is from Python 3.8.5; the official Chinese documentation referenced is for 3.8.8. The official documentation linked in the references is very detailed, and it is recommended to read it first to understand how to use logging.

class                      function
logging (module-level)     the top-level logging API
Logger                     logger class; applications create Logger objects to write logs
LogRecord                  log record; every log call is wrapped in a LogRecord object
Handler                    handler; controls output of log records to streams/files
Formatter                  formatter; formats log records into text
RotatingFileHandler        file handler that rotates the log file by size
TimedRotatingFileHandler   file handler that rotates the log file by time

We focus on the main line: how logs are output to the console. Logging configuration files, thread safety, and special handlers can be ignored for now.

Logging API design

Let’s start with logging:

import logging

logging.basicConfig(level=logging.INFO, format='%(levelname)-8s %(name)-10s %(asctime)s %(message)s')
lang = {"name": "python", "age":20}
logging.info('This is a info message %s', lang)
logging.debug('This is a debug message')
logging.warning('This is a warning message')

logger = logging.getLogger(__name__)
logger.warning('This is a warning')

The output is as follows:

INFO     root       2021-03-04 00:03:53,473 This is a info message {'name': 'python', 'age': 20}
WARNING  root       2021-03-04 00:03:53,473 This is a warning message
WARNING  __main__   2021-03-04 00:03:53,473 This is a warning

As you can see, logging is very easy to use: the module directly provides a set of module-level APIs.

root = RootLogger(WARNING)
Logger.root = root
Logger.manager = Manager(Logger.root)

def debug(msg, *args, **kwargs):
    if len(root.handlers) == 0:
        basicConfig()          # configure on first use
    root.debug(msg, *args, **kwargs)

def getLogger(name=None):
    if name:
        return Logger.manager.getLogger(name)  # create/fetch a named Logger
    else:
        return root                            # return the default root logger
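A quick sanity check of this delegation, using only the public logging API (the logger name "app" below is just an example):

```python
import logging

# getLogger() with no name returns the root logger itself
assert logging.getLogger() is logging.root

# named loggers are created once and cached by the manager
a = logging.getLogger("app")
b = logging.getLogger("app")
assert a is b

# all named loggers share the manager attached to the Logger class
assert a.manager is logging.Logger.manager
```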

This is how the API is provided, similar to what we saw in Requests. The key configuration entry point of the API is basicConfig:

def basicConfig(**kwargs):
    ...
    if handlers is None:
        filename = kwargs.pop("filename", None)
        mode = kwargs.pop("filemode", 'a')
        if filename:
            h = FileHandler(filename, mode)
        else:
            stream = kwargs.pop("stream", None)
            h = StreamHandler(stream)     # default: a stream handler
        handlers = [h]
    dfs = kwargs.pop("datefmt", None)
    style = kwargs.pop("style", '%')
    fs = kwargs.pop("format", _STYLES[style][1])
    fmt = Formatter(fs, dfs, style)
    for h in handlers:
        if h.formatter is None:
            h.setFormatter(fmt)
        root.addHandler(h)                # attach the handler to root
    level = kwargs.pop("level", None)
    if level is not None:
        root.setLevel(level)              # set the log level

As you can see, log configuration mainly includes the following items:

  • level — the log level
  • format — the message format template
  • filename — output to a file instead of the console
  • datefmt — date/time format template, e.g. %Y-%m-%d %H:%M:%S,uuu (the ,uuu part is milliseconds)
  • style — format style, one of '%', '{' or '$'

In the demo output above, the debug message is not displayed because DEBUG < INFO:

CRITICAL = 50
FATAL = CRITICAL
ERROR = 40
WARNING = 30
WARN = WARNING
INFO = 20
DEBUG = 10
NOTSET = 0
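Since levels are plain integers, the check is just a numeric comparison (the logger name below is hypothetical):

```python
import logging

# a record is emitted only when its level >= the logger's effective level
assert logging.DEBUG < logging.INFO < logging.WARNING < logging.ERROR
assert logging.WARN == logging.WARNING == 30   # WARN is just an alias

logger = logging.getLogger("levels-demo")      # hypothetical logger name
logger.setLevel(logging.INFO)
assert logger.isEnabledFor(logging.WARNING)
assert not logger.isEnabledFor(logging.DEBUG)
```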

Logger object Logger

Before looking at Logger itself, look at the Logger object’s management class, Manager:

_loggerClass = Logger

class Manager(object):
    def __init__(self, rootnode):
        self.root = rootnode
        self.disable = 0
        self.loggerDict = {}   # registry of all loggers by name
        ...

    def getLogger(self, name):
        rv = None
        if name in self.loggerDict:
            rv = self.loggerDict[name]
        else:
            rv = (self.loggerClass or _loggerClass)(name)  # create a new Logger
            rv.manager = self
            self.loggerDict[name] = rv
        ...
        return rv
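The manager also wires up the hierarchy implied by dotted names (a small sketch; the names are examples):

```python
import logging

parent = logging.getLogger("shop")
child = logging.getLogger("shop.cart")   # the dot creates a parent/child link

assert child.parent is parent
assert parent.parent is logging.root
```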

Log filter

class Filterer(object):
    def __init__(self):
        self.filters = []

    def addFilter(self, filter):
        self.filters.append(filter)

    def removeFilter(self, filter):
        self.filters.remove(filter)

    def filter(self, record):
        rv = True
        for f in self.filters:          # run every filter over the record
            if hasattr(f, 'filter'):
                result = f.filter(record)
            else:
                result = f(record)      # assume callable - will raise if not
            if not result:
                rv = False
                break
        return rv
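Anything with a filter(record) method (or any plain callable) can be attached; returning a falsy value drops the record. A minimal sketch, with hypothetical names:

```python
import logging

class DropDebug:
    """Filter out DEBUG records; any object with filter(record) works."""
    def filter(self, record):
        return record.levelno > logging.DEBUG   # falsy result drops it

logger = logging.getLogger("filter-demo")       # hypothetical name
logger.addFilter(DropDebug())

kept = logger.makeRecord("filter-demo", logging.INFO, __file__, 1,
                         "kept", (), None)
dropped = logger.makeRecord("filter-demo", logging.DEBUG, __file__, 2,
                            "dropped", (), None)
assert logger.filter(kept)
assert not logger.filter(dropped)
```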

The core Logger is really just a control center:

class Logger(Filterer):
    def __init__(self, name, level=NOTSET):
        Filterer.__init__(self)
        self.name = name
        self.level = _checkLevel(level)
        self.parent = None        # loggers form a hierarchy
        self.propagate = True     # pass records up to ancestor handlers
        self.handlers = []
        self.disabled = False
        self._cache = {}

    def debug(self, msg, *args, **kwargs):
        if self.isEnabledFor(DEBUG):
            self._log(DEBUG, msg, args, **kwargs)

Logger can determine the log level:

def isEnabledFor(self, level):
    if self.disabled:
        return False

    try:
        return self._cache[level]          # cached decision
    except KeyError:
        # (the real code takes the module lock around the cache update)
        if self.manager.disable >= level:
            is_enabled = self._cache[level] = False
        else:
            is_enabled = self._cache[level] = (
                level >= self.getEffectiveLevel()
            )
        return is_enabled

def getEffectiveLevel(self):
    logger = self
    while logger:
        if logger.level:
            return logger.level
        logger = logger.parent
    return NOTSET
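The effect of this upward walk is that a child logger left at NOTSET inherits its parent’s level (logger names are examples):

```python
import logging

parent = logging.getLogger("svc")
parent.setLevel(logging.ERROR)
child = logging.getLogger("svc.db")   # level stays NOTSET

assert child.level == logging.NOTSET
assert child.getEffectiveLevel() == logging.ERROR   # inherited from parent
```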

Log output:

def _log(self, level, msg, args, exc_info=None, extra=None,
         stack_info=False, stacklevel=1):
    ...
    fn, lno, func = "(unknown file)", 0, "(unknown function)"
    ...
    record = self.makeRecord(self.name, level, fn, lno, msg, args,
                             exc_info, func, extra)
    self.handle(record)   # hand the record over to the handlers

To produce the log, makeRecord creates a LogRecord object through a factory:

_logRecordFactory = LogRecord

def makeRecord(self, name, level, fn, lno, msg, args, exc_info,
               func=None, extra=None, sinfo=None):
    ...
    rv = _logRecordFactory(name, level, fn, lno, msg, args, exc_info, func,
                         sinfo)
    ...
    return rv
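The factory is swappable through the public logging.setLogRecordFactory, which is handy for stamping extra fields onto every record (the hostname field below is a hypothetical example):

```python
import logging

old_factory = logging.getLogRecordFactory()

def record_with_host(*args, **kwargs):
    record = old_factory(*args, **kwargs)
    record.hostname = "web-1"   # hypothetical extra attribute
    return record

logging.setLogRecordFactory(record_with_host)
try:
    rec = logging.getLogger("factory-demo").makeRecord(
        "factory-demo", logging.INFO, __file__, 1, "hi", (), None)
    assert rec.hostname == "web-1"
finally:
    logging.setLogRecordFactory(old_factory)   # restore the default
```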

All the handlers along the logger hierarchy are then used to handle the record:

def callHandlers(self, record):
    c = self
    found = 0
    while c:                      # walk up the logger hierarchy
        for hdlr in c.handlers:
            found = found + 1
            if record.levelno >= hdlr.level:
                hdlr.handle(record)
        if not c.propagate:
            c = None              # stop walking
        else:
            c = c.parent
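This upward walk is what makes propagation work: a handler attached to the root sees records logged on any descendant (a sketch using an in-memory stream):

```python
import io
import logging

buf = io.StringIO()
root_handler = logging.StreamHandler(buf)
logging.getLogger().addHandler(root_handler)

child = logging.getLogger("prop.demo")   # no handlers of its own
child.warning("bubbled up")              # handled by the root's handler

logging.getLogger().removeHandler(root_handler)
assert "bubbled up" in buf.getvalue()
```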

The root logger’s handler is configured in basicConfig:

def basicConfig(**kwargs):
    ...
    root.addHandler(h)   # attach the handler to the root logger

Log object LogRecord

Logging objects are very simple:

class LogRecord(object):
    def __init__(self, name, level, pathname, lineno,
                 msg, args, exc_info, func=None, sinfo=None, **kwargs):
        ct = time.time()
        self.name = name                      # logger name
        self.msg = msg                        # message format string
        self.args = args                      # message arguments
        self.levelname = getLevelName(level)
        ...

    def getMessage(self):
        msg = str(self.msg)
        if self.args:
            msg = msg % self.args             # merge the arguments lazily
        return msg
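Note that the %-style arguments are merged lazily in getMessage(), not at call time:

```python
import logging

rec = logging.LogRecord("demo", logging.INFO, __file__, 1,
                        "user=%s id=%d", ("bob", 7), None)
assert rec.msg == "user=%s id=%d"            # template stored as-is
assert rec.getMessage() == "user=bob id=7"   # merged on demand
```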

Processor object Handler

The top-level Handler defines the template methods of a handler:

class Handler(Filterer):
    def __init__(self, level=NOTSET):
        Filterer.__init__(self)
        self._name = None
        self.level = _checkLevel(level)   # a handler also has a log level
        self.formatter = None
        _addHandlerRef(self)
        self.createLock()

    def handle(self, record):
        rv = self.filter(record)
        if rv:
            self.acquire()                # take the handler lock
            try:
                self.emit(record)
            finally:
                self.release()            # release the lock
        return rv

The default console stream handler is StreamHandler:

class StreamHandler(Handler):
    terminator = '\n'   # appended after every record

    def __init__(self, stream=None):
        Handler.__init__(self)
        if stream is None:
            stream = sys.stderr   # default to standard error
        self.stream = stream

    def emit(self, record):
        try:
            msg = self.format(record)          # format the record
            stream = self.stream
            stream.write(msg + self.terminator)
            self.flush()
        except Exception:
            ...

    def format(self, record):
        if self.formatter:
            fmt = self.formatter
        else:
            fmt = _defaultFormatter
        return fmt.format(record)   # delegate to the formatter
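Because the stream is just any object with write()/flush(), and terminator is a class attribute, both are easy to swap (a sketch; the logger name is hypothetical):

```python
import io
import logging

buf = io.StringIO()
h = logging.StreamHandler(buf)
h.terminator = " | "   # replace the default '\n'

logger = logging.getLogger("stream-demo")   # hypothetical name
logger.addHandler(h)
logger.propagate = False   # keep the records out of the root handlers
logger.warning("one")
logger.warning("two")
assert buf.getvalue() == "one | two | "
```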

As for why stderr is used, the following test shows that all three lines end up on the console:

print("haha")
print("fatal error", file=sys.stderr)
sys.stderr.write("fatal error\n")

Formatter object Formatter

Formatting is implemented mainly by the Formatter and Style classes:

class Formatter(object):
    def __init__(self, fmt=None, datefmt=None, style='%', validate=True):
        self._style = _STYLES[style][0](fmt)
        self._fmt = self._style._fmt
        self.datefmt = datefmt

    def format(self, record):
        record.message = record.getMessage()
        s = self.formatMessage(record)
        return s

    def formatMessage(self, record):
        return self._style.format(record)   # delegate to the style object

Style class

class PercentStyle(object):
    default_format = '%(message)s'
    asctime_format = '%(asctime)s'
    asctime_search = '%(asctime)'
    validation_pattern = re.compile(
        r'%\(\w+\)[#0+ -]*(\*|\d+)?(\.(\*|\d+))?[diouxefgcrsa%]', re.I)

    def __init__(self, fmt):
        self._fmt = fmt or self.default_format

    def usesTime(self):
        return self._fmt.find(self.asctime_search) >= 0

    def validate(self):
        """Validate the input format, ensure it matches the correct style"""
        if not self.validation_pattern.search(self._fmt):
            raise ValueError("Invalid format '%s' for '%s' style"
                             % (self._fmt, self.default_format[0]))

    def _format(self, record):
        return self._fmt % record.__dict__   # %-style substitution

    def format(self, record):
        try:
            return self._format(record)
        except KeyError as e:
            raise ValueError('Formatting field not found in record: %s' % e)
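The three style classes only differ in template syntax; the same record formats to the same text (a sketch):

```python
import logging

rec = logging.LogRecord("style-demo", logging.INFO, __file__, 1,
                        "hi", (), None)

percent = logging.Formatter("%(levelname)s:%(message)s", style="%")
brace = logging.Formatter("{levelname}:{message}", style="{")
dollar = logging.Formatter("${levelname}:${message}", style="$")

# one record, three template syntaxes, identical output
assert percent.format(rec) == brace.format(rec) == dollar.format(rec) == "INFO:hi"
```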

Rotating log file handlers

Continuously writing production logs to a single file makes the file so large that it is hard to handle and increases the risk of loss. Logs are therefore usually rotated by size or by date, which is very important. Start with the rotating handler template:

class BaseRotatingHandler(logging.FileHandler):
    def emit(self, record):
        try:
            if self.shouldRollover(record):
                self.doRollover()                  # roll the file over first
            logging.FileHandler.emit(self, record)
        except Exception:
            self.handleError(record)

    def rotate(self, source, dest):
        if not callable(self.rotator):
            if os.path.exists(source):
                os.rename(source, dest)            # rename the old log file
        else:
            self.rotator(source, dest)

Rotating by size: RotatingFileHandler

A handler that rotates by file size:

class RotatingFileHandler(BaseRotatingHandler):
    def __init__(self, filename, mode='a', maxBytes=0, backupCount=0,
                 encoding=None, delay=False):
        if maxBytes > 0:
            mode = 'a'
        BaseRotatingHandler.__init__(self, filename, mode, encoding, delay)
        self.maxBytes = maxBytes          # size limit per file
        self.backupCount = backupCount    # number of backups to keep

    def doRollover(self):
        if self.stream:
            self.stream.close()
            self.stream = None
        if self.backupCount > 0:
            for i in range(self.backupCount - 1, 0, -1):
                sfn = self.rotation_filename("%s.%d" % (self.baseFilename, i))
                dfn = self.rotation_filename("%s.%d" % (self.baseFilename, i + 1))
                if os.path.exists(sfn):
                    if os.path.exists(dfn):
                        os.remove(dfn)
                    os.rename(sfn, dfn)
            dfn = self.rotation_filename(self.baseFilename + ".1")
            if os.path.exists(dfn):
                os.remove(dfn)
            self.rotate(self.baseFilename, dfn)   # rename the current file
        if not self.delay:
            self.stream = self._open()

    def shouldRollover(self, record):
        if self.stream is None:                   # delay was set
            self.stream = self._open()
        if self.maxBytes > 0:
            msg = "%s\n" % self.format(record)
            self.stream.seek(0, 2)  # due to non-posix-compliant Windows feature
            if self.stream.tell() + len(msg) >= self.maxBytes:
                return 1
        return 0

Size-based rotation checks, on every log call, whether the file exceeds the limit; if it does, the old log is renamed and a new file is started.
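A tiny maxBytes makes the rollover easy to observe (a sketch; the path, sizes and names are arbitrary):

```python
import logging
import os
import tempfile
from logging.handlers import RotatingFileHandler

log_dir = tempfile.mkdtemp()
log_path = os.path.join(log_dir, "size.log")   # hypothetical path

h = RotatingFileHandler(log_path, maxBytes=64, backupCount=2)
logger = logging.getLogger("rotate-demo")
logger.addHandler(h)
logger.propagate = False
logger.setLevel(logging.INFO)

for i in range(20):   # ~18 bytes per line forces several rollovers
    logger.info("message number %d", i)
h.close()

assert os.path.exists(log_path)
assert os.path.exists(log_path + ".1")   # a rotated backup exists
```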

Rotating by date: TimedRotatingFileHandler

A handler that rotates by date:

class TimedRotatingFileHandler(BaseRotatingHandler):
    def __init__(self, filename, when='h', interval=1, backupCount=0,
                 encoding=None, delay=False, utc=False, atTime=None):
        BaseRotatingHandler.__init__(self, filename, 'a', encoding, delay)
        self.when = when.upper()
        self.backupCount = backupCount
        self.utc = utc
        self.atTime = atTime
        if self.when == 'S':
            self.interval = 1   # one second
            self.suffix = "%Y-%m-%d_%H-%M-%S"
            self.extMatch = r"^\d{4}-\d{2}-\d{2}_\d{2}-\d{2}-\d{2}(\.\w+)?$"
        ...
        self.extMatch = re.compile(self.extMatch, re.ASCII)
        self.interval = self.interval * interval  # multiply by units requested
        filename = self.baseFilename
        if os.path.exists(filename):
            t = os.stat(filename)[ST_MTIME]       # resume from file mtime
        else:
            t = int(time.time())
        self.rolloverAt = self.computeRollover(t)

    def computeRollover(self, currentTime):
        ...

    def shouldRollover(self, record):
        t = int(time.time())
        if t >= self.rolloverAt:
            return 1
        return 0

    def doRollover(self):
        ...
        dfn = self.rotation_filename(self.baseFilename + "." +
                                     time.strftime(self.suffix, timeTuple))
        if os.path.exists(dfn):
            os.remove(dfn)
        self.rotate(self.baseFilename, dfn)       # roll the log file over
        if self.backupCount > 0:
            for s in self.getFilesToDelete():
                os.remove(s)
        ...
        newRolloverAt = self.computeRollover(currentTime)
        self.rolloverAt = newRolloverAt

Date-based rotation computes the next rollover time point in advance; once that point is passed, a new log file is generated.
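The smallest unit, when='S', makes this easy to observe (a sketch; the path and names are hypothetical, and the short sleep is just to step past the rollover point):

```python
import logging
import os
import tempfile
import time
from logging.handlers import TimedRotatingFileHandler

log_dir = tempfile.mkdtemp()
log_path = os.path.join(log_dir, "timed.log")   # hypothetical path

# when='S' rolls over every second
h = TimedRotatingFileHandler(log_path, when="S", interval=1, backupCount=3)
logger = logging.getLogger("timed-demo")
logger.addHandler(h)
logger.propagate = False
logger.warning("first")
time.sleep(1.1)           # step past rolloverAt
logger.warning("second")  # triggers doRollover
h.close()

# rotated files carry the date suffix, e.g. timed.log.2021-03-04_00-03-53
rotated = [f for f in os.listdir(log_dir) if f.startswith("timed.log.")]
assert len(rotated) >= 1
```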

Summary

The logging logic looks something like this:

  • A Logger object provides the API that receives the application’s log calls
  • A Logger object holds multiple Handler objects
  • Each Handler has a Formatter object
  • Each log call produces a LogRecord object
  • Different Handler objects deliver LogRecord objects to different streams
  • Each record is formatted by the handler’s Formatter before output
  • Log files can be rotated by date or by file size

Tips

RootLogger overrides the object’s __reduce__ method to support pickling; unpickling goes back through getLogger:

class RootLogger(Logger):
    def __init__(self, level):
        Logger.__init__(self, "root", level)

    def __reduce__(self):
        return getLogger, ()
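The effect is that pickling the root logger round-trips to the same singleton:

```python
import logging
import pickle

# RootLogger.__reduce__ returns (getLogger, ()), so unpickling
# calls logging.getLogger() and hands back the singleton root
root_again = pickle.loads(pickle.dumps(logging.root))
assert root_again is logging.root
```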

Thread lock creation and release:

_lock = threading.RLock()

def _acquireLock():
    if _lock:
        _lock.acquire()

def _releaseLock():
    if _lock:
        _lock.release()

Use of thread locks:

def addHandler(self, hdlr):
    _acquireLock()
    try:
        self.handlers.append(hdlr)
    finally:
        _releaseLock()

def removeHandler(self, hdlr):
    _acquireLock()
    try:
        self.handlers.remove(hdlr)
    finally:
        _releaseLock()

That’s all for today. I hope this article was helpful; if you liked it, don’t forget to bookmark, share, comment, and follow!