One of the worst tasks a developer faces is figuring out why an unfamiliar application isn’t working. Sometimes you don’t even know whether the system is behaving as originally designed.

Applications that run in production are black boxes that need to be observed. The simplest, and one of the most important, ways to do this is logging. Logging lets the program emit information while it is running that is useful to us and to the system administrator.

Just as we document code for future programmers, we should make new software generate enough logs for the system’s developers and administrators to use. Logs are a critical part of the record of your application’s health. When adding logs, think of them as documentation for the developers and administrators who will maintain the system in the future.

Some purists argue that a disciplined developer who uses logging and tests hardly needs an interactive debugger. If we can’t explain what the application is doing during development with detailed logs, it becomes even harder to understand the code when it runs in production.

This article introduces Python’s logging module: its design and how to adapt it to more complex cases. It is not intended as documentation for developers, but rather as a guide to how the logging module is constructed, and hopefully as inspiration for further research.

Why the logging module?

Some developers might ask: why not just use a simple print statement? The logging module offers many advantages, including:

  1. Multithreading support
  2. Log classification by different levels
  3. Flexibility and configurability
  4. Separation of what is logged from how it is logged

The last point, the real separation of what we log from how we log it, ensures that different parts of the software can collaborate. For example, it allows the developer of a framework or library to add logs, and lets the system administrator, or whoever is responsible for running the application, decide later what should be logged.

What is in the logging module?

The logging module cleanly separates the responsibilities of each of its parts (following the approach of the Apache Log4j API). Let’s look at how a log line travels through the module’s code and examine its different parts.

Logger

Loggers are the objects developers interact with most often. Their main API describes what we want to log.

Given a logger instance, we can categorize and send messages without worrying about how or where they will be emitted.

For example, when we write logger.info("Stock was sold at %s", price) we have the following model in mind:

We need a line of output, and we assume that some code runs inside the logger so that this line appears in the console or in a file. But what actually happens inside?
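As a sketch, interacting with a logger might look like this (the logger name "orders" is just an illustrative choice):

```python
import logging

# Loggers are retrieved by name; repeated calls with the same
# name return the same global object.
logger = logging.getLogger("orders")

# Each method records a message at a different severity level.
logger.debug("Connection pool size is %s", 10)
logger.info("Stock was sold at %s", 42.5)
logger.warning("Price %s is below the floor", 1.0)
logger.error("Could not complete the sale")
```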

LogRecord

LogRecords are the packages the logging module uses to carry all the required information. They contain information about where the log was requested, the template string, the parameters, extra metadata, and so on.

They are plain objects, generated every time we call a logger. But how do these objects get serialized into a stream? Through handlers!

Handler

Handlers send log records to particular output destinations: they receive the records and process them in their associated functions.

For example, a FileHandler takes a log record and appends it to a file.

The standard logging module already provides a variety of built-in handlers, such as:

  1. Several file handlers (TimedRotatingFileHandler, RotatingFileHandler, WatchedFileHandler), which write to files
  2. StreamHandler, which writes to a target stream such as stdout or stderr
  3. SMTPHandler, which sends log records by email
  4. SocketHandler, which sends log records to a stream socket
  5. SysLogHandler, NTEventLogHandler, HTTPHandler, MemoryHandler, and more
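For example, attaching both a StreamHandler and a RotatingFileHandler to the same logger might look like this (the file path and size limit are illustrative):

```python
import logging
import os
import sys
import tempfile
from logging.handlers import RotatingFileHandler

logger = logging.getLogger("app")
logger.setLevel(logging.INFO)

# StreamHandler writes every record to a stream such as stdout or stderr.
logger.addHandler(logging.StreamHandler(stream=sys.stdout))

# RotatingFileHandler writes to a file and starts a new one
# once the current file reaches maxBytes.
logpath = os.path.join(tempfile.mkdtemp(), "app.log")
file_handler = RotatingFileHandler(logpath, maxBytes=1_000_000, backupCount=3)
logger.addHandler(file_handler)

logger.info("This record reaches both handlers")
```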

Now we have a model that looks like the real thing:

Most handlers deal with strings (SMTPHandler, FileHandler, etc.). You might be wondering how these structured LogRecords are transformed into strings that are easy to serialize.

Formatter

Formatters are responsible for converting the rich metadata of a LogRecord into a string; a default formatter is used if none is provided.

The generic Formatter class provided by the logging library takes a template and a style as input. Placeholders in the template can refer to any attribute of a LogRecord.

For example, the template '%(asctime)s %(levelname)s %(name)s: %(message)s' renders the time, the level name, and the logger name in front of a message such as “Hello EuroPython”.

Note that the message attribute is the result of interpolating the log’s original template with the provided arguments. (For example, for logger.info("Hello %s", "Laszlo") the message will be "Hello Laszlo".)
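Putting it together, a formatter attached to a handler behaves as in this sketch (logging to an in-memory stream so the output is easy to inspect):

```python
import io
import logging

log_output = io.StringIO()
handler = logging.StreamHandler(log_output)

# The template may reference any LogRecord attribute.
handler.setFormatter(logging.Formatter("%(levelname)s %(name)s: %(message)s"))

logger = logging.getLogger("europython")
logger.addHandler(handler)
logger.warning("Hello %s", "EuroPython")

print(log_output.getvalue())  # WARNING europython: Hello EuroPython
```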

All the default attributes can be found in the logging documentation.

Ok, now that we know about formatters, our model has changed again:

Filter

The final object in our logging toolbox is the filter.

Filters allow fine-grained control over which log records should be emitted. Multiple filters can be applied to both loggers and handlers. For a record to be emitted, all of their filters must let it pass.

Users can declare their own filters as objects with a filter method that takes a log record as input and returns True/False as output.
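A small sketch of a custom filter (the rule itself is only an example):

```python
import io
import logging

class DropParseErrors(logging.Filter):
    """Reject any record whose final message mentions 'parse'."""

    def filter(self, record: logging.LogRecord) -> bool:
        # getMessage() returns the template interpolated with its arguments.
        return "parse" not in record.getMessage()

output = io.StringIO()
handler = logging.StreamHandler(output)

logger = logging.getLogger("filtered")
logger.addFilter(DropParseErrors())
logger.addHandler(handler)

logger.warning("failed to parse config")  # rejected by the filter
logger.warning("disk is almost full")     # passes through
```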

With that in mind, here is the current logging workflow:

Logger hierarchy

At this point you might be impressed by all the complexity the module cleverly hides, but there is one more thing to consider: the logger hierarchy.

We create a logger with logging.getLogger(<logger_name>). The string passed to getLogger can define a hierarchy by separating the elements with dots.

For example, logging.getLogger("parent.child") will create a "child" logger whose parent is "parent". Loggers are global objects managed by the logging module, so we can conveniently retrieve them anywhere in a project.

A logger instance is also commonly thought of as a channel. The hierarchy lets developers define channels and their structure.

When a record is passed to the handlers of a logger, the handlers of its parents are used recursively as well, until we reach the top-level root logger (named with an empty string) or a logger that has propagate = False set. We can see this in the updated figure:

Note that the parent loggers themselves are not called, only their handlers. This means that filters and other code in the Logger class will not be executed at the parent levels. This is a common trap when adding filters to loggers.
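A short sketch of propagation in action:

```python
import io
import logging

out = io.StringIO()
parent = logging.getLogger("parent")
parent.addHandler(logging.StreamHandler(out))

child = logging.getLogger("parent.child")  # no handler of its own

# The record travels up the hierarchy and is emitted by the parent's handler.
child.warning("propagated upwards")

# With propagate = False the parent's handler no longer sees the record.
child.propagate = False
child.warning("stays with the child")
```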

Workflow summary

We’ve explained the separation of responsibilities and how we can fine-tune log filtering. There are, however, two other attributes we haven’t mentioned yet:

  1. Loggers can be disabled, preventing any record from being emitted from them.
  2. A level can be set on both loggers and handlers.

For example, when a logger is set to the INFO level, only records at INFO level and above are passed on, and the same rule applies to handlers.
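A sketch of both levels at work:

```python
import io
import logging

out = io.StringIO()
handler = logging.StreamHandler(out)
handler.setLevel(logging.ERROR)   # the handler drops anything below ERROR

logger = logging.getLogger("levels")
logger.setLevel(logging.INFO)     # the logger drops anything below INFO
logger.addHandler(handler)

logger.debug("dropped by the logger")
logger.info("passes the logger, dropped by the handler")
logger.error("passes both levels")
```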

With all these considerations in mind, the final logging flowchart looks like this:

How do I use the logging module

Now that we have seen the parts and design of the Logging module, it is time to understand how a developer interacts with it. Here is a code example:
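A minimal sketch of such an example (with the logging call falling on line 5, as discussed next):

```python
import logging

logger = logging.getLogger(__name__)  # e.g. "projectA.module" inside projectA/module.py

logger.info("Stock was sold at %s", 42)
```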

It creates a logger using the module’s __name__. This builds channels and levels that follow the project structure, since Python module names are joined with dots just like logger names.

The logger variable refers to the "module" logger, whose parent is "projectA", whose parent in turn is the root logger.

On line 5 we see the call that sends the log. We can log at the appropriate severity using one of the debug, info, warning, error, or critical methods.

When logging a message, in addition to the template arguments we can pass keyword arguments with special meaning, the most interesting being exc_info and stack_info. They add information about the current exception and the stack frame, respectively. For convenience, the logger object has an exception method, which is the same as calling error with exc_info=True.

These are the basics of how to use the logger module, but some practices that are generally considered bad are also worth explaining.

Eagerly formatting strings

You should avoid logger.info("string template {}".format(argument)) and use logger.info("string template %s", argument) where possible. This is better practice because the string is only interpolated if the record is actually emitted. When logging at a level that is filtered out (say, DEBUG records while the level is INFO), the eager version wastes cycles because the formatting happens even though the record is discarded.
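A sketch of the difference; the Unprintable class below is only there to prove that the lazy version never builds the string:

```python
import logging

logger = logging.getLogger("perf")
logger.setLevel(logging.WARNING)   # DEBUG records will be discarded

class Unprintable:
    def __str__(self):
        raise RuntimeError("this should never be formatted")

# Lazy %s interpolation: the record is discarded before formatting,
# so __str__ is never called and no error is raised.
logger.debug("State dump: %s", Unprintable())

# Eager formatting would build the string up front:
# logger.debug("State dump: {}".format(Unprintable()))  # raises RuntimeError
```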

Catching and formatting exceptions

We often want to log information about a caught exception. It is intuitive to write this:
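Something along these lines (the dictionary lookup is just an illustrative failure):

```python
import logging

logger = logging.getLogger(__name__)

def fetch_secret(config):
    try:
        return config["secret_key"]
    except KeyError as error:
        logger.error("Something bad happened: %s", error)

fetch_secret({})  # logs: Something bad happened: 'secret_key'
```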

But this code would show us a log line like Something bad happened: 'secret_key', which isn’t very useful. If we use exc_info instead, it would look like this:
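For example, using the exception convenience method, which is equivalent to error with exc_info=True:

```python
import logging

logger = logging.getLogger(__name__)

def fetch_secret(config):
    try:
        return config["secret_key"]
    except KeyError:
        # Same as logger.error("...", exc_info=True): the exception type
        # and the full traceback are appended to the log line.
        logger.exception("Something bad happened")

fetch_secret({})
```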

This will contain not only the exact source of the exception, but also its type and the full traceback.

Configuring logging

Instrumenting our software is the simple part; we also need to configure the logging stack and specify how records are emitted.

There are several ways to set up the logging stack:

basicConfig

This is by far the easiest way to set up logging. logging.basicConfig(level="INFO") builds a basic StreamHandler that logs everything at INFO level and above to the console. Here are some of the parameters basicConfig accepts:

| Parameter | Description | Example |
| --- | --- | --- |
| filename | Create a FileHandler with the given file name instead of a StreamHandler | /var/logs/logs.txt |
| format | Use this format string for the handler | '%(asctime)s %(message)s' |
| datefmt | Use this date/time format | '%H:%M:%S' |
| level | Set this level on the root logger | 'INFO' |

This is an easy and useful way to set up logging for a simple script.

Note that basicConfig only works this way at the start of the run. If the root logger has already been configured, calling basicConfig will have no effect.

Dictionary configuration

The configuration of all the elements and how they are connected can be expressed as a dictionary. The dictionary has different sections for loggers, handlers, formatters, and some basic global parameters.

An example:
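A sketch of a dictConfig dictionary (the handler and logger names here are placeholders for your own):

```python
import logging
import logging.config

LOGGING_CONFIG = {
    "version": 1,
    "disable_existing_loggers": False,
    "formatters": {
        "standard": {"format": "%(asctime)s %(levelname)s %(name)s: %(message)s"},
    },
    "handlers": {
        "console": {
            "class": "logging.StreamHandler",
            "formatter": "standard",
            "level": "INFO",
        },
    },
    "loggers": {
        "myproject": {"handlers": ["console"], "level": "DEBUG"},
    },
}

logging.config.dictConfig(LOGGING_CONFIG)
logging.getLogger("myproject").info("configured via dictConfig")
```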

dictConfig will disable all existing loggers when it is called, unless disable_existing_loggers is set to false. Setting it to false is usually what you want, because many modules declare a global logger that is instantiated at import time, before dictConfig is called.

You can view the schema that can be used with the dictConfig method (link). Typically, these settings are stored in a YAML file and loaded from there. Many developers prefer this approach over fileConfig (link) because it offers better support for customization.

Extending logging

Thanks to this design, extending the logging module is easy. Let’s look at some examples:

Logging JSON

We can log JSON by creating a custom formatter that transforms the log records into JSON-encoded strings:
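A minimal custom formatter along these lines (the selection of attributes is illustrative):

```python
import io
import json
import logging

class JsonFormatter(logging.Formatter):
    """Serialize a few interesting LogRecord attributes as JSON."""

    def format(self, record: logging.LogRecord) -> str:
        return json.dumps({
            "level": record.levelname,
            "name": record.name,
            "message": record.getMessage(),
        })

out = io.StringIO()
handler = logging.StreamHandler(out)
handler.setFormatter(JsonFormatter())

logger = logging.getLogger("json_demo")
logger.addHandler(handler)
logger.warning("Hello %s", "EuroPython")

print(out.getvalue())  # {"level": "WARNING", "name": "json_demo", "message": "Hello EuroPython"}
```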

Adding more Context

In the formatter, we can refer to any LogRecord attribute.

We can inject attributes in a number of ways; in this example we use a filter to enrich the log records:
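One possible sketch, using a filter that never rejects records but annotates each one (request_id is a hypothetical attribute name):

```python
import io
import logging

class RequestIdFilter(logging.Filter):
    """Attach a request id to every record that passes through."""

    def __init__(self, request_id):
        super().__init__()
        self.request_id = request_id

    def filter(self, record: logging.LogRecord) -> bool:
        record.request_id = self.request_id  # new attribute on the record
        return True                          # never drop anything

out = io.StringIO()
handler = logging.StreamHandler(out)
handler.setFormatter(logging.Formatter("%(request_id)s %(message)s"))

logger = logging.getLogger("ctx_demo")
logger.addFilter(RequestIdFilter("req-42"))
logger.addHandler(handler)

logger.warning("user logged in")
print(out.getvalue())  # req-42 user logged in
```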

This effectively adds an attribute to every record that passes through the logger. The formatter can then include the attribute in the log line.

Note that this affects every log record in your application, including those from libraries and other frameworks you use whose logs you are handling. It can be used to record something like a unique request ID on all log lines, to track requests or to add extra contextual information.

Starting with Python 3.2, you can also use setLogRecordFactory to intercept the creation of all log records and add extra information to them. The extra keyword argument and the LoggerAdapter class may also be of interest.

Buffering logs

Sometimes we want to have the debug logs at hand when an error occurs. One way to achieve this is to create a buffering handler that keeps the most recent debug records and emits them when an error happens. The following is an example of such a scheme:
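The standard library’s MemoryHandler already implements one variant of this idea: it buffers records and flushes them to a target handler once a record at flushLevel (or above) arrives. A sketch:

```python
import io
import logging
from logging.handlers import MemoryHandler

out = io.StringIO()
target = logging.StreamHandler(out)

# Keep up to 100 records in memory; write them all to `target`
# as soon as a record at ERROR level (or above) arrives.
buffered = MemoryHandler(capacity=100, flushLevel=logging.ERROR, target=target)

logger = logging.getLogger("buffered_demo")
logger.setLevel(logging.DEBUG)
logger.addHandler(buffered)

logger.debug("step 1")            # held in the buffer, nothing written yet
logger.debug("step 2")            # still buffered
logger.error("something broke")   # triggers the flush: all three appear
```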

For more information

The purpose of this introduction to the flexibility and configurability of the logging library has been to show the beauty of its separation-of-concerns design. It should also provide a solid base for anyone interested in the logging how-to guide. Although this article is not a comprehensive overview of the Python logging module, here are answers to some common questions.

Q: My library emits a warning like “No handlers could be found for logger …”

A: Check out the advice on configuring logging in a library in The Hitchhiker’s Guide to Python

Q: What happens if a logger has no level set?

A: The effective level of a logger is defined recursively through its parents.

Q: All my logs are in local time, how do I record them in UTC?

A: Formatters are the answer! You need to set the converter attribute of your formatter so that it produces UTC times: use converter = time.gmtime.
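For example:

```python
import logging
import time

formatter = logging.Formatter("%(asctime)s %(message)s")

# asctime is produced via the formatter's converter; time.gmtime
# yields UTC struct_time values instead of local time.
formatter.converter = time.gmtime
```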


The original address: www.tuicool.com/articles/q2…
