Preface

The logging framework is Python’s built-in logging module. It is at once familiar and unfamiliar to many Python developers. It is certainly simple to use:

import logging

logging.debug("This is a debug log.")
logging.info("This is a info log.")
logging.warning("This is a warning log.")
logging.error("This is a error log.")
logging.critical("This is a critical log.")

# output
WARNING:root:This is a warning log.
ERROR:root:This is an error log.
CRITICAL:root:This is a critical log.

However, when we need to build a logging layer on top of the logging module in a real project, we often run into various problems. Examples include losing records when rotating log files under multiple processes, poor logging performance, and failing to take advantage of logging’s core mechanisms during secondary development because the internal flow is not well understood.

Next, let’s dig into the logging system, starting from a single line of code.

1. In-depth Logging module source code

1.1 What does logging.info do?

logging.info is the first line of code most people use from the logging module. It hides many implementation details: at first we focus on the functionality and ignore what happens inside.

import logging
logging.info("This is a info log.")

# logging/__init__.py
def info(msg, *args, **kwargs):
    """ Log a message with severity 'INFO' on the root logger. If the logger has no handlers, call basicConfig() to add a console handler with a pre-defined format. """
    # If the root logger has no handlers yet, call basicConfig() first
    if len(root.handlers) == 0:
        basicConfig()
    root.info(msg, *args, **kwargs)

_lock = threading.RLock()

def _acquireLock():
    """ Acquire the module-level lock for serializing access to shared data. This should be released with _releaseLock(). """
    if _lock:
        _lock.acquire()

def _releaseLock():
    """ Release the module-level lock acquired by calling _acquireLock(). """
    if _lock:
        _lock.release()
        
def basicConfig(**kwargs):
    """
    Handy one-step configuration method that creates a StreamHandler to print logs to the console.

    This function does nothing if the root logger already has handlers
    configured. It is a convenience method intended for use by simple scripts
    to do one-shot configuration of the logging package.

    The default behaviour is to create a StreamHandler which writes to
    sys.stderr, set a formatter using the BASIC_FORMAT format string, and
    add the handler to the root logger.
    """
    # Add thread safety in case someone mistakenly calls
    # basicConfig() from multiple threads
    # Locking is done to keep the configuration safe under multiple threads
    # (the standard multithreaded locking shown above)
    _acquireLock()
    try:
        if len(root.handlers) == 0:
            # Create a FileHandler if a filename is given, otherwise a StreamHandler
            handlers = kwargs.pop("handlers", None)
            if handlers is None:
                if "stream" in kwargs and "filename" in kwargs:
                    raise ValueError("'stream' and 'filename' should not be "
                                     "specified together")
            else:
                if "stream" in kwargs or "filename" in kwargs:
                    raise ValueError("'stream' or 'filename' should not be "
                                     "specified together with 'handlers'")
            if handlers is None:
                filename = kwargs.pop("filename", None)
                mode = kwargs.pop("filemode", 'a')
                if filename:
                    h = FileHandler(filename, mode)
                else:
                    stream = kwargs.pop("stream", None)
                    h = StreamHandler(stream)
                handlers = [h]
            dfs = kwargs.pop("datefmt", None)
            style = kwargs.pop("style", '%')
            if style not in _STYLES:
                raise ValueError('Style must be one of: %s' % ', '.join(
                                 _STYLES.keys()))
            fs = kwargs.pop("format", _STYLES[style][1])
            fmt = Formatter(fs, dfs, style)
            # Bind the handlers to root
            for h in handlers:
                if h.formatter is None:
                    h.setFormatter(fmt)
                root.addHandler(h)
            level = kwargs.pop("level", None)
            if level is not None:
                root.setLevel(level)
            if kwargs:
                keys = ', '.join(kwargs.keys())
                raise ValueError('Unrecognised argument(s): %s' % keys)
    finally:
        _releaseLock()

So far we can see that logging.info does not print anything until basicConfig() has been called to initialize a handler: under the module-level lock, a handler is created and bound to root. So what exactly is root?

# logging/__init__.py

root = RootLogger(WARNING)
Logger.root = root
Logger.manager = Manager(Logger.root)

class RootLogger(Logger):
    """ A root logger is not that different to any other logger, except that it must have a logging level and there is only one instance of it in the hierarchy. """
    def __init__(self, level):
        """ Initialize the logger with the name "root". """
        # The parent class is initialized with the name "root" and the level WARNING, which is
        # why a bare logging.info() call prints nothing: the root logger level starts at WARNING
        Logger.__init__(self, "root", level)

    def __reduce__(self):
        return getLogger, ()

class Logger(Filterer):
    """ Instances of the Logger class represent a single logging channel. A "logging channel" indicates an area of an application. Exactly how an "area" is defined is up to the application developer. Since an application can have any number of areas, logging channels are identified by a unique string. Application areas can be nested (e.g. an area of "input processing" might include sub-areas "read CSV files", "read XLS files" and "read Gnumeric files"). To cater for this natural nesting, channel names are organized into a namespace hierarchy where levels are separated by periods, much like the Java or Python package namespace. So in the instance given above, channel names might be "input" for the upper level, and "input.csv", "input.xls" and "input.gnu" for the sub-levels. There is no arbitrary limit to the depth of nesting. """
    def __init__(self, name, level=NOTSET):
        """ Initialize the logger with a name and an optional level. """
        Filterer.__init__(self)
        self.name = name                    # name = "root" for the root logger
        self.level = _checkLevel(level)     # the logger's own level
        self.parent = None
        self.propagate = True
        self.handlers = []
        self.disabled = False
        self._cache = {}
        
    def info(self, msg, *args, **kwargs):
        """ Log 'msg % args' with severity 'INFO'. To pass exception information, use the keyword argument exc_info with a true value, e.g. logger.info("Houston, we have a %s", "interesting problem", exc_info=1) """
        # Check whether this logger's effective level allows INFO records through
        if self.isEnabledFor(INFO):
            self._log(INFO, msg, args, **kwargs)

    def getEffectiveLevel(self):
        """ Get the effective level for this logger. Loop through this logger and its parents in the logger hierarchy, looking for a non-zero logging level. Return the first one found. """
        # Walk up through the parent loggers until an explicit level is found
        logger = self
        while logger:
            if logger.level:
                return logger.level
            logger = logger.parent
        return NOTSET

    def isEnabledFor(self, level):
        """ Is this logger enabled for level 'level'? """
        try:
            return self._cache[level]
        except KeyError:
            # the same module-level lock operation as before
            _acquireLock()
            if self.manager.disable >= level:
                is_enabled = self._cache[level] = False
            else:
                is_enabled = self._cache[level] = level >= self.getEffectiveLevel()
            _releaseLock()

            return is_enabled

root is an instance of RootLogger. RootLogger inherits from Logger, and every record is ultimately logged through _log; the logger-level check happens before _log is called.
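
Before reading on, the effect of these defaults can be verified in a few lines. A minimal sketch (force=True requires Python 3.8+):

import logging

logging.info("dropped")      # root defaults to WARNING, so INFO is filtered by isEnabledFor()
logging.warning("printed")   # WARNING >= WARNING, emitted via the implicitly created StreamHandler

# Lower the root level; force=True replaces the handler that the implicit
# basicConfig() call above already attached to root
logging.basicConfig(level=logging.INFO, force=True)
logging.info("now printed")

# A named logger left at NOTSET inherits its effective level from root
child = logging.getLogger("app.db")
print(child.getEffectiveLevel() == logging.INFO)   # True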

# To get the first caller in the stack:
# _srcfile is used when walking the stack to check when we've got the first
# caller stack frame, by skipping frames whose filename is that of this
# module's source. It therefore should contain the filename of this module's
# source file.
#
# Ordinarily we would use __file__ for this, but frozen modules don't always
# have __file__ set, for some reason (see Issue #21736). Thus, we get the
# filename from a handy code object from a function defined in this module.
# (There's no particular reason for picking addLevelName.)
#

_srcfile = os.path.normcase(addLevelName.__code__.co_filename)

# _srcfile is only used in conjunction with sys._getframe().
# To provide compatibility with older versions of Python, set _srcfile
# to None if _getframe() is not available; this value will prevent
# findCaller() from being called. You can also do this if you want to avoid
# the overhead of fetching caller information, even when _getframe() is
# available.
#if not hasattr(sys, '_getframe'):
# _srcfile = None

def _log(self, level, msg, args, exc_info=None, extra=None, stack_info=False):
        """ Low-level logging routine which creates a LogRecord and then calls all the handlers of this logger to handle the record. """
        sinfo = None
        if _srcfile:
            #IronPython doesn't track Python frames, so findCaller raises an
            #exception on some versions of IronPython. We trap it here so that
            #IronPython can use logging.
            try:
                fn, lno, func, sinfo = self.findCaller(stack_info)
            except ValueError: # pragma: no cover
                fn, lno, func = "(unknown file)", 0, "(unknown function)"
        else: # pragma: no cover
            fn, lno, func = "(unknown file)", 0, "(unknown function)"
        if exc_info:
            if isinstance(exc_info, BaseException):
                exc_info = (type(exc_info), exc_info, exc_info.__traceback__)
            elif not isinstance(exc_info, tuple):
                exc_info = sys.exc_info()
        # Generate a log record via makeRecord
        record = self.makeRecord(self.name, level, fn, lno, msg, args,
                                 exc_info, func, extra, sinfo)
        # processing
        self.handle(record)

def findCaller(self, stack_info=False):
        """ Find the stack frame of the caller so that we can note the source file name, line number and function name. """
        f = currentframe()
        #On some versions of IronPython, currentframe() returns None if
        #IronPython isn't run with -X:Frames.
        if f is not None:
            f = f.f_back
        rv = "(unknown file)", 0, "(unknown function)", None
        while hasattr(f, "f_code"):
            co = f.f_code
            filename = os.path.normcase(co.co_filename)
            if filename == _srcfile:
                f = f.f_back
                continue
            sinfo = None
            if stack_info:
                sio = io.StringIO()
                sio.write('Stack (most recent call last):\n')
                traceback.print_stack(f, file=sio)
                sinfo = sio.getvalue()
                if sinfo[-1] == '\n':
                    sinfo = sinfo[:-1]
                sio.close()
            rv = (co.co_filename, f.f_lineno, co.co_name, sinfo)
            break
        return rv

def makeRecord(self, name, level, fn, lno, msg, args, exc_info,
                   func=None, extra=None, sinfo=None):
        """ A factory method which can be overridden in subclasses to create specialized LogRecords. """
        # Create an instance of _logRecordFactory (LogRecord by default)
        rv = _logRecordFactory(name, level, fn, lno, msg, args, exc_info, func,
                             sinfo)
        if extra is not None:
            for key in extra:
                if (key in ["message", "asctime"]) or (key in rv.__dict__):
                    raise KeyError("Attempt to overwrite %r in LogRecord" % key)
                rv.__dict__[key] = extra[key]
        return rv
    
def handle(self, record):
        """ Call the handlers for the specified record. This method is used for unpickled records received from a socket, as well as those created locally. Logger-level filtering is applied. """
        if (not self.disabled) and self.filter(record):
            self.callHandlers(record)

def callHandlers(self, record):
        """ Pass a record to all relevant handlers. Loop through all handlers for this logger and its parents in the logger hierarchy. If no handler was found, output a one-off error message to sys.stderr. Stop searching up the hierarchy whenever a logger with the "propagate" attribute set to zero is found - that will be the last logger whose handlers are called. """
        c = self
        found = 0
        # The handlers are called one by one to process the record, much like the way
        # sys.meta_path importers are called in turn during an import. We called basicConfig()
        # earlier to create a handler and bind it to root; root.info() eventually lands here,
        # and finally in the handler's handle() method (e.g. FileHandler.handle)
        while c:
            for hdlr in c.handlers:
                found = found + 1
                if record.levelno >= hdlr.level:
                    hdlr.handle(record)
            if not c.propagate:
                c = None    #break out
            else:
                c = c.parent
        if (found == 0):
            if lastResort:
                if record.levelno >= lastResort.level:
                    lastResort.handle(record)
            elif raiseExceptions and not self.manager.emittedNoHandlerWarning:
                sys.stderr.write("No handlers could be found for logger"
                                 " \"%s\"\n" % self.name)
                self.manager.emittedNoHandlerWarning = True

Let’s move on to FileHandler

class FileHandler(StreamHandler):
    def _open(self):
        """ Open the current base file with the (original) mode and encoding. Return the resulting stream. """
        return open(self.baseFilename, self.mode, encoding=self.encoding)

    def emit(self, record):
        """ Emit a record. If the stream was not opened because 'delay' was specified in the constructor, open it before calling the superclass's emit. """
        # Open the file handle (file stream) if it is not open yet
        if self.stream is None:
            self.stream = self._open()
        # Call the emit method of StreamHandler
        StreamHandler.emit(self, record)

class StreamHandler(Handler):
    def flush(self):
        """ Flushes the stream. """
        # Flush under the handler lock again
        self.acquire()
        try:
            if self.stream and hasattr(self.stream, "flush"):
                self.stream.flush()
        finally:
            self.release()
            
    def emit(self, record):
        """ Emit a record. If a formatter is specified, it is used to format the record. The record is then written to the stream with a trailing newline. If exception information is present, it is formatted using traceback.print_exception and appended to the stream. If the stream has an 'encoding' attribute, it is used to determine how to do the output to the stream. """
        try:
            msg = self.format(record)
            stream = self.stream
            # issue 35046: merged two stream.writes into one.
            # Write to the stream buffer, then flush
            stream.write(msg + self.terminator)
            self.flush()
        except RecursionError:  # See issue 36272
            raise
        except Exception:
            self.handleError(record)

class Handler(Filterer):
    # FileHandler's handle() method is inherited from the grandparent class Handler
    def filter(self, record):
        """ Determine if a record is loggable by consulting all the filters. The default is to allow the record to be logged; any filter can veto this and the record is then dropped. Returns a zero value if a record is to be dropped, else non-zero. .. Versionchanged :: 3.2 Allow filters to be just callables. """
        rv = True
        for f in self.filters:
            if hasattr(f, 'filter'):
                result = f.filter(record)
            else:
                result = f(record) # assume callable - will raise if not
            if not result:
                rv = False
                break
        return rv
    
    def handle(self, record):
        """ Conditionally emit the specified logging record. Emission depends on filters which may have been added to the handler. Wrap the actual emission of the record with acquisition/release of the I/O thread lock. Returns whether the filter passed the record for emission. """
        # Filter the record, then call the subclass's emit() override to do the real write
        rv = self.filter(record)
        if rv:
            self.acquire()
            try:
                self.emit(record)
            finally:
                self.release()
        return rv

Now that we have traced what happens behind logging.info, let’s combine the code above with the official logging flow chart to tease out the framework of the logging process.

1.2 Logging Process framework

Source: Logging HOWTO

The source code lives in Python’s own Lib/logging/, with the main content in __init__.py. Following the flow chart and the source code we analyzed, the four components and the log record structure are defined as follows

  • Logger: the core component; several Handlers and several Filters can be mounted on it, and it defines the namespace and the logging level it responds to
  • Handler: one Formatter and several Filters can be mounted on it, and it defines the logging level it responds to and the output destination
  • Filter: the filter; it inspects the incoming LogRecord and returns True/False to decide whether the Logger or Handler it is mounted on should process the current log. It can also act as middleware: with custom rules it can rewrite the LogRecord and pass it on to the next Filter/Handler/Logger (see the sketch after this list)
  • Formatter: the component that performs the final formatting of the log
  • LogRecord: the structure of a single log record
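
To make the Filter-as-middleware point concrete, here is a minimal sketch that enriches every LogRecord with an extra field before the handler formats it (the request_id value is a made-up placeholder):

import logging

class RequestIdFilter(logging.Filter):
    def filter(self, record):
        record.request_id = "req-42"   # rewrite the LogRecord; returning False would drop it
        return True

handler = logging.StreamHandler()
handler.setFormatter(logging.Formatter("%(levelname)s %(request_id)s %(message)s"))

logger = logging.getLogger("demo")
logger.setLevel(logging.INFO)
logger.addFilter(RequestIdFilter())
logger.addHandler(handler)

logger.info("hello")   # -> INFO req-42 hello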

According to the flow chart, the main process is as follows

  1. After a log request reaches the Logger, it first determines whether the current Logger should process this level; if not, the request is discarded immediately (the first level of control).

  2. A LogRecord is generated, packaging the call site and other information, and the Filter chain mounted on the Logger is called to process it

  3. If any Filter returns False, the log is discarded

  4. Otherwise, it is passed to the Handler chain mounted on the Logger for processing.

  5. If the propagate property is enabled (i.e. propagate upwards), the current LogRecord is passed to the parent Logger for processing, starting directly from step 4 (without triggering the first level of control again)

The Handler’s own flow is as follows

  1. Determine whether the current Handler should process this level; if not, the record is discarded immediately (the second level of control)
  2. The received LogRecord is passed in turn through the Filter chain mounted on the Handler
  3. Finally, the Formatter is called to format the record (a minimal sketch of both control levels follows this list)
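
A minimal sketch exercising the two control points and the propagate switch:

import logging, sys

root = logging.getLogger()
root.setLevel(logging.DEBUG)                   # first level of control: the Logger decides

console = logging.StreamHandler(sys.stdout)
console.setLevel(logging.WARNING)              # second level of control: the Handler decides again
console.setFormatter(logging.Formatter("%(name)s %(levelname)s %(message)s"))
root.addHandler(console)

app = logging.getLogger("app")                 # NOTSET, so it inherits DEBUG from root
app.info("passes the logger check, but the handler's WARNING level drops it")
app.warning("printed: propagates up to root's console handler")

app.propagate = False                          # step 5: stop walking up the hierarchy
app.warning("no longer reaches root's handler (only the last-resort stderr handler sees it)")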

2. Logging in multi-process scenarios

2.1 Logging Problem analysis in multi-process scenarios

The logging module is thread-safe; we have seen threading.RLock() used in many places in the code above. In particular, calling the info/error methods to print a log ends up in the handle method of the top-level parent class Handler:

# logging/__init__.py

def handle(self, record):
        # Filter the record, then call the subclass's emit() override to do the real write
        rv = self.filter(record)
        if rv:
            self.acquire()
            try:
                self.emit(record)
            finally:
                self.release()
        return rv

This ensures that, in a multithreaded environment, only one thread at a time can execute a handler’s emit.
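
A minimal sketch of that guarantee: several threads share the same handler, and although lines from different threads interleave, no single line is ever torn, because Handler.handle acquires the handler lock around emit.

import logging
import threading

logging.basicConfig(level=logging.INFO, format="%(threadName)s %(message)s")

def work(n):
    for i in range(3):
        logging.info("worker %d, message %d", n, i)

threads = [threading.Thread(target=work, args=(i,)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()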

However, another scenario that may be overlooked is the problems that arise in a multi-process environment. When we deploy Python Web projects, we usually start in a multi-process manner, which can cause the following problems:

  • Log interleaving: for example, if two processes write the logs xxxx and yyyy respectively, you may end up with something like xxyxyxyy in the file
  • Log loss: although logs are written in O_APPEND mode, which guarantees consistency of file writes, data first goes into a buffer and is only later flushed to disk, and fwrite is not multi-process safe
  • Another kind of log loss: when RotatingFileHandler or TimedRotatingFileHandler switches files, each process holds a different file handle, so new files are created repeatedly and data keeps being written to old files (a small reproduction sketch follows this list)
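
The rotation problem in particular is easy to reproduce. A minimal sketch (file name and sizes are arbitrary) in which every process attaches its own RotatingFileHandler to the same path, so their rollovers conflict:

import logging
import logging.handlers
import multiprocessing

def worker(idx):
    logger = logging.getLogger("mp")
    logger.setLevel(logging.INFO)
    # Each process holds its own file handle; once one process rotates
    # app.log -> app.log.1, the others keep writing to the old handle or
    # re-create the file, so records get lost or land in the wrong file.
    handler = logging.handlers.RotatingFileHandler("app.log", maxBytes=2000, backupCount=3)
    logger.addHandler(handler)
    for i in range(1000):
        logger.info("process %d line %d", idx, i)

if __name__ == "__main__":
    procs = [multiprocessing.Process(target=worker, args=(i,)) for i in range(4)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()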

2.2 Multi-process Logging solution

In order to deal with the above possible situations, the following are several solutions:

2.2.1 The concurrent-log-handler module (file lock mode)

I have seen this module in other articles. Let’s analyze its implementation principle (GitHub address).

# concurrent_log_handler/__init__.py

# ConcurrentRotatingFileHandler inherits from logging's BaseRotatingHandler
class ConcurrentRotatingFileHandler(BaseRotatingHandler):
    # Write logic
    def emit(self, record):
        try:
            msg = self.format(record)
            # Locking logic
            try:
                self._do_lock()
                # shouldRollover / doRollover handle the file rotation (splitting)
                try:
                    if self.shouldRollover(record):
                        self.doRollover()
                except Exception as e:
                    self._console_log("Unable to do rollover: %s" % (e,), stack=True)
                    # Continue on anyway
                self.do_write(msg)
            finally:
                self._do_unlock()
        except (KeyboardInterrupt, SystemExit):
            raise
        except Exception:
            self.handleError(record)
            
    def _do_lock(self):
        # check whether a lock has been created
        if self.is_locked:
            return   # already locked... recursive?
        # Trigger the file lock
        self._open_lockfile()
        if self.stream_lock:
            for i in range(10):
                # noinspection PyBroadException
                try:
                    # Call the portalocker lock method
                    lock(self.stream_lock, LOCK_EX)
                    self.is_locked = True
                    break
                except Exception:
                    continue
            else:
                raise RuntimeError("Cannot acquire lock after 10 attempts")
        else:
            self._console_log("No self.stream_lock to lock", stack=True)
	
    def _open_lockfile(self):
        """
        Change the file permissions
        """
        if self.stream_lock and not self.stream_lock.closed:
            self._console_log("Lockfile already open in this process")
            return
        lock_file = self.lockFilename
        self._console_log(
            "concurrent-log-handler %s opening %s" % (hash(self), lock_file), stack=False)
		
        with self._alter_umask():
            self.stream_lock = open(lock_file, "wb", buffering=0)
		
        self._do_chown_and_chmod(lock_file)
    
    def _do_unlock(self):
        if self.stream_lock:
            if self.is_locked:
                try:
                    unlock(self.stream_lock)
                finally:
                    self.is_locked = False
                    self.stream_lock.close()
                    self.stream_lock = None
        else:
            self._console_log("No self.stream_lock to unlock", stack=True)
           
# portalocker/portalocker.py
def lock(file_: typing.IO, flags: constants.LockFlags):
        if flags & constants.LockFlags.SHARED:
            ...
        else:
            if flags & constants.LockFlags.NON_BLOCKING:
                mode = msvcrt.LK_NBLCK
            else:
                mode = msvcrt.LK_LOCK

            # windows locks byte ranges, so make sure to lock from file start
            try:
                savepos = file_.tell()
                if savepos:
                    # [ ] test exclusive lock fails on seek here
                    # [ ] test if shared lock passes this point
                    file_.seek(0)
                    # [x] check if 0 param locks entire file (not documented in
                    # Python)
                    # [x] fails with "IOError: [Errno 13] Permission denied",
                    # but -1 seems to do the trick
                try:
                    msvcrt.locking(file_.fileno(), mode, lock_length)
                except IOError as exc_value:
                    # [ ] be more specific here
                    raise exceptions.LockException(
                        exceptions.LockException.LOCK_FAILED,
                        exc_value.strerror,
                        fh=file_)
                finally:
                    if savepos:
                        file_.seek(savepos)
            except IOError as exc_value:
                raise exceptions.LockException(
                    exceptions.LockException.LOCK_FAILED, exc_value.strerror,
                    fh=file_)
# msvcrt.locking(fd, mode, nbytes)
#     Lock part of a file based on file descriptor fd from the C runtime. Raises OSError on
#     failure. The locked region of the file extends from the current file position for
#     nbytes bytes, and may continue beyond the end of the file. mode must be one of the
#     LK_* constants listed below. Multiple regions in a file may be locked at the same
#     time, but may not overlap. Adjacent regions are not merged; they must be unlocked
#     individually. Raises an auditing event msvcrt.locking with arguments fd, mode, nbytes.

In the end, the file-lock mechanism is invoked every time emit is triggered, which turns concurrent writes from multiple processes into sequential writes and guarantees that logs are written correctly. In terms of efficiency, however, frequently changing file permissions, locking, and contending for the lock make it relatively slow.
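
In use it is essentially a drop-in replacement for RotatingFileHandler. A minimal sketch (the file name and sizes are arbitrary; the constructor arguments here mirror RotatingFileHandler’s):

import logging
from concurrent_log_handler import ConcurrentRotatingFileHandler   # pip install concurrent-log-handler

logger = logging.getLogger("app")
logger.setLevel(logging.INFO)

# emit() takes a cross-process file lock, re-checks whether rotation is needed,
# writes the record and then releases the lock
handler = ConcurrentRotatingFileHandler("app.log", maxBytes=10 * 1024 * 1024, backupCount=5)
handler.setFormatter(logging.Formatter("%(asctime)s %(process)d %(levelname)s %(message)s"))
logger.addHandler(handler)

logger.info("written under a cross-process file lock")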

2.2.2 Overriding the doRollover method for log rotation and rewriting the FileHandler class

Of course, besides the file-lock approach above, we can also subclass TimedRotatingFileHandler or FileHandler ourselves and add simple multi-process locking logic, for example with fcntl.flock (a Python sketch follows the C source below).

static PyObject *
fcntl_flock_impl(PyObject *module, int fd, int code)
/*[clinic end generated code: output=84059e2b37d2fc64 input=0bfc00f795953452]*/
{
    int ret;
    int async_err = 0;

    if (PySys_Audit("fcntl.flock", "ii", fd, code) < 0) {
        return NULL;
    }
    /* Trigger the Linux flock(2) call to lock the file */
#ifdef HAVE_FLOCK
    do {
        Py_BEGIN_ALLOW_THREADS
        ret = flock(fd, code);
        Py_END_ALLOW_THREADS
    } while (ret == -1 && errno == EINTR && !(async_err = PyErr_CheckSignals()));
#else

#ifndef LOCK_SH
#define LOCK_SH 1 /* shared lock */
#define LOCK_EX 2 /* exclusive lock */
#define LOCK_NB 4 /* don't block when locking */
#define LOCK_UN 8 /* unlock */
#endif
    {
        struct flock l;
        if (code == LOCK_UN)
            l.l_type = F_UNLCK;
        else if (code & LOCK_SH)
            l.l_type = F_RDLCK;
        else if (code & LOCK_EX)
            l.l_type = F_WRLCK;
        else {
            PyErr_SetString(PyExc_ValueError,
                            "unrecognized flock argument");
            return NULL;
        }
        l.l_whence = l.l_start = l.l_len = 0;
        do {
            Py_BEGIN_ALLOW_THREADS
            ret = fcntl(fd, (code & LOCK_NB) ? F_SETLK : F_SETLKW, &l);
            Py_END_ALLOW_THREADS
        } while (ret == -1 && errno == EINTR && !(async_err = PyErr_CheckSignals()));
    }
#endif /* HAVE_FLOCK */
    if (ret < 0) {
        return !async_err ? PyErr_SetFromErrno(PyExc_OSError) : NULL;
    }
    Py_RETURN_NONE;
}
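
Back in Python, the same primitive can be wrapped into a handler. A minimal, hypothetical sketch (POSIX only; it serializes appends from multiple processes but does not by itself coordinate rotation):

import fcntl
import logging

class FlockFileHandler(logging.FileHandler):
    """A FileHandler that takes an exclusive flock() around each write so that
    several processes can append to the same file without interleaving records."""

    def emit(self, record):
        if self.stream is None:          # the handler may have been created with delay=True
            self.stream = self._open()
        fcntl.flock(self.stream.fileno(), fcntl.LOCK_EX)
        try:
            super().emit(record)         # FileHandler.emit -> StreamHandler.emit: write + flush
        finally:
            fcntl.flock(self.stream.fileno(), fcntl.LOCK_UN)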

2.2.3 Master/Worker log collection (Socket/Queue mode)

This approach is also the one the official documentation recommends; quoting from the docs:

Although logging is thread-safe, and logging to a single file from multiple threads in a single process is supported, logging to a single file from multiple processes is not supported, because there is no standard way to serialize access to a single file across multiple processes in Python. If you need to log to a single file from multiple processes, one way of doing this is to have all the processes log to a SocketHandler, and have a separate process which implements a socket server which reads from the socket and logs to file. (If you prefer, you can dedicate one thread in one of the existing processes to perform this function.) This section documents this approach in more detail and includes a working socket receiver which can be used as a starting point for you to adapt in your own applications.

Alternatively, you can use a Queue and a QueueHandler to send all logging events to one of the processes in your multi-process application. The following example script demonstrates how you can do this; in the example a separate listener process listens for events sent by other processes and logs them according to its own logging configuration. Although the example only demonstrates one way of doing it (for example, you may want to use a listener thread rather than a separate listener process - the implementation would be analogous) it does allow for completely different logging configurations for the listener and the other processes in your application, and can be used as the basis for code meeting your own specific requirements:

Look at the QueueHandler example

import logging
import logging.config
import logging.handlers
from multiprocessing import Process, Queue
import random
import threading
import time

def logger_thread(q):
    """ separate logging thread """
    while True:
        record = q.get()
        if record is None:
            break
        # Get the logger named in the record instance
        logger = logging.getLogger(record.name)
        # Call the logger's handle()
        logger.handle(record)


def worker_process(q):
    # Log-writing worker process
    qh = logging.handlers.QueueHandler(q)
    root = logging.getLogger()
    root.setLevel(logging.DEBUG)
    # Bind the QueueHandler to the logger
    root.addHandler(qh)
    levels = [logging.DEBUG, logging.INFO, logging.WARNING, logging.ERROR,
              logging.CRITICAL]
    loggers = ['foo', 'foo.bar', 'foo.bar.baz', 'spam', 'spam.ham', 'spam.ham.eggs']
    for i in range(100):
        lvl = random.choice(levels)
        logger = logging.getLogger(random.choice(loggers))
        logger.log(lvl, 'Message no. %d', i)

if __name__ == '__main__':
    q = Queue()
    # a big chunk of configuration (the dict d) is omitted here
    workers = []
    # Multiple processes write logs into the Queue
    for i in range(5):
        wp = Process(target=worker_process, name='worker %d' % (i + 1), args=(q,))
        workers.append(wp)
        wp.start()
    logging.config.dictConfig(d)
    # Start a child thread responsible for collecting and writing the logs
    lp = threading.Thread(target=logger_thread, args=(q,))
    lp.start()
    # At this point, the main process could do some useful work of its own
    # Once it's done that, it can wait for the workers to terminate...
    for wp in workers:
        wp.join()
    # And now tell the logging thread to finish up, too
    q.put(None)
    lp.join()

Look at the case of SocketHandler

# the sender

import logging, logging.handlers

rootLogger = logging.getLogger('')
rootLogger.setLevel(logging.DEBUG)
# Specify the IP address and port number
socketHandler = logging.handlers.SocketHandler('localhost',
                    logging.handlers.DEFAULT_TCP_LOGGING_PORT)
# don't bother with a formatter, since a socket handler sends the event as
# an unformatted pickle
rootLogger.addHandler(socketHandler)

# Now, we can log to the root logger, or any other logger. First the root...
logging.info('Jackdaws love my big sphinx of quartz.')

# Now, define a couple of other loggers which might represent areas in your
# application:

logger1 = logging.getLogger('myapp.area1')
logger2 = logging.getLogger('myapp.area2')

logger1.debug('Quick zephyrs blow, vexing daft Jim.')
logger1.info('How quickly daft jumping zebras vex.')
logger2.warning('Jail zesty vixen who grabbed pay from quack.')
logger2.error('The five boxing wizards jump quickly.')

# the receiving end

import pickle
import logging
import logging.handlers
import socketserver
import struct


class LogRecordStreamHandler(socketserver.StreamRequestHandler):
    """Handler for a streaming logging request. This basically logs the record using whatever logging policy is configured locally. """

    def handle(self):
        """ Handle multiple requests - each expected to be a 4-byte length, followed by the LogRecord in pickle format. Logs the record according to whatever policy is configured locally. """
        while True:
            # Process of receiving data
            chunk = self.connection.recv(4)
            if len(chunk) < 4:
                break
            slen = struct.unpack('>L', chunk)[0]
            chunk = self.connection.recv(slen)
            while len(chunk) < slen:
                chunk = chunk + self.connection.recv(slen - len(chunk))
            obj = self.unPickle(chunk)
            # Create a LogRecord instance and hand it to handleLogRecord
            record = logging.makeLogRecord(obj)
            self.handleLogRecord(record)

    def unPickle(self, data):
        return pickle.loads(data)

    def handleLogRecord(self, record):
        # if a name is specified, we use the named logger rather than the one
        # implied by the record.
        if self.server.logname is not None:
            name = self.server.logname
        else:
            name = record.name
        logger = logging.getLogger(name)
        # N.B. EVERY record gets logged. This is because Logger.handle
        # is normally called AFTER logger-level filtering. If you want
        # to do filtering, do it at the client end to save wasting
        # cycles and network bandwidth!
        logger.handle(record)

class LogRecordSocketReceiver(socketserver.ThreadingTCPServer):
    """ Simple TCP socket-based logging receiver suitable for testing. """

    allow_reuse_address = True

    def __init__(self, host='localhost',
                 port=logging.handlers.DEFAULT_TCP_LOGGING_PORT,
                 handler=LogRecordStreamHandler):
        socketserver.ThreadingTCPServer.__init__(self, (host, port), handler)
        self.abort = 0
        self.timeout = 1
        self.logname = None

    def serve_until_stopped(self):
        import select
        abort = 0
        while not abort:
            # select() waits for data on the listening socket
            rd, wr, ex = select.select([self.socket.fileno()],
                                       [], [],
                                       self.timeout)
            if rd:
                # Call the handle method of LogRecordStreamHandler
                self.handle_request()
            abort = self.abort

def main():
    logging.basicConfig(
        format='%(relativeCreated)5d %(name)-15s %(levelname)-8s %(message)s')
    tcpserver = LogRecordSocketReceiver()
    print('About to start TCP server... ')
    tcpserver.serve_until_stopped()

if __name__ == '__main__':
    main()

Both of the modes above follow the Master/Worker approach: log-writing efficiency is not affected by concurrency and ultimately depends only on the writing thread or process. QueueHandler together with its companion QueueListener can also be used to deal with slow or blocking log writes.
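
For completeness, a minimal sketch of the QueueHandler/QueueListener pair mentioned above: the logging call itself only enqueues the record, while the listener’s background thread does the slow file I/O (the file name is arbitrary):

import logging
import logging.handlers
import queue

log_queue = queue.Queue(-1)                        # unbounded queue

# Producers only do a cheap queue.put() ...
root = logging.getLogger()
root.setLevel(logging.INFO)
root.addHandler(logging.handlers.QueueHandler(log_queue))

# ... while the listener's background thread does the real writing
file_handler = logging.FileHandler("app.log")
file_handler.setFormatter(logging.Formatter("%(asctime)s %(levelname)s %(message)s"))
listener = logging.handlers.QueueListener(log_queue, file_handler)
listener.start()

root.info("handled asynchronously by the listener thread")

listener.stop()                                    # flushes remaining records and joins the thread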