These are notes on things I use frequently, since it is always a bit of a hassle to check the documentation every time. Python logging is an important lifeline for production applications, and no one should take it lightly.

Principles of good logging

Separation of levels

Logging systems usually provide the following levels, used according to the situation:

  • FATAL – System-level errors that cause the program to exit. When such an error occurs, the system administrator must intervene immediately; use this level sparingly.
  • ERROR – Runtime exceptions and unexpected errors. These also need to be handled immediately, but with less urgency than FATAL, and they affect the correct execution of the program. Note that both FATAL and ERROR denote service errors requiring administrator intervention; user input errors do not belong in this category.
  • WARN – Unexpected runtime conditions that indicate a possible problem with the system. WARN also covers conditions that are not yet errors but could become errors if not handled promptly, such as low disk space.
  • INFO – Meaningful events that record the normal running state of the program, such as a request being received or successfully executed. Scanning INFO entries lets you quickly locate nearby WARN, ERROR, and FATAL records. INFO should not be excessive; usually no more than 10% of TRACE volume.
  • DEBUG – Details of the program's execution flow and the current state of variables.
  • TRACE – Even more detailed information. The boundary between DEBUG and TRACE is defined by the project team. These logs let you follow each step of an operation and locate the operation, parameters, and sequence that caused an error. (See the sketch below for how these levels map onto Python's logging module.)
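Python's standard logging module covers most of these levels out of the box: FATAL exists as an alias of CRITICAL, but there is no built-in TRACE. A minimal sketch of registering one, assuming a numeric value below DEBUG is acceptable:

import logging

# FATAL already exists as an alias of CRITICAL; TRACE does not, so
# register a custom level below DEBUG (10).
TRACE = 5
logging.addLevelName(TRACE, "TRACE")

logging.basicConfig(level=TRACE)
logging.log(TRACE, "step %d finished", 1)  # prints "TRACE:root:step 1 finished"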

A separate directory

It is recommended that logs be stored in a separate log directory, such as /var/logs/, divided into subdirectories or files per application. Do not store logs in the application directory: that gets in the way of automatic deployment, application upgrades, and backups.

Log classification

Logs serve different purposes: diagnostic logs, statistical logs, audit logs, and so on. Store each category in a separate file to make later querying and analysis easier.
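A minimal sketch of splitting one category into its own file; the "audit" logger name and the file path are hypothetical:

import logging

# Audit records get their own file, separate from the diagnostic log.
audit = logging.getLogger("audit")
audit.setLevel(logging.INFO)
audit.addHandler(logging.FileHandler("/var/logs/myapp/audit.log"))
audit.propagate = False  # keep audit records out of the root handlers

audit.info("user 42 changed their password")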

Log format

Whether it’s web logs or application logs, it’s a good idea to keep the format consistent (the time format in particular) to make log querying, archiving, and analysis easier. Some applications use a uniform JSON log format, which is also fine.

Bad practice

  • Logging sensitive user information
  • Using print in production code
  • Running the production environment at DEBUG level 😢

Log rotation

Logs can be split and compressed by day, by week, or by file size. On the one hand this makes it easy to trace back in time; on the other, it saves disk space. Very old logs can be shipped to a remote server or deleted.
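A minimal sketch of time-based rotation using the standard library's TimedRotatingFileHandler (the path is an example; compression would need a custom rotator or an external tool such as logrotate):

import logging
from logging.handlers import TimedRotatingFileHandler

# Rotate at midnight and keep the last 30 days of files.
handler = TimedRotatingFileHandler('/var/logs/myapp/app.log',
                                   when='midnight', backupCount=30)
handler.setFormatter(logging.Formatter(
    '%(asctime)s %(name)s %(levelname)s %(message)s'))
logging.getLogger().addHandler(handler)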

Python logging

Good habits

  • Set the log format on the root logger, so it stays standardized across the application
  • In a class, set the logger with self.logger = logging.getLogger(type(self).__name__)
  • At module level, set the logger in the file with logger = logging.getLogger(__name__) (both naming habits are sketched after this list)
  • Configuring logging with formats such as JSON or YAML feels like a better way than configuring it in code or in the INI format
  • Error logging is a special kind of log because it requires more information, such as the context in which the error occurred and the error stack. This can be achieved by googling keywords like "python logging context pypi", or by designing your own logging handler.
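A minimal sketch of the two logger-naming habits above; OrderService and process are hypothetical names:

import logging

# Module-level logger, named after the module (e.g. "myapp.orders").
logger = logging.getLogger(__name__)

class OrderService:
    def __init__(self):
        # Class-level logger, named after the concrete class.
        self.logger = logging.getLogger(type(self).__name__)

    def process(self, order_id):
        self.logger.info("processing order %s", order_id)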

Practical examples

  • A simple application: one log file, printed to the console at the same time
import logging

# Log everything at DEBUG and above to a file...
logging.basicConfig(level=logging.DEBUG,
                    format='%(asctime)s %(name)-12s %(levelname)-8s %(message)s',
                    datefmt='%m-%d %H:%M',
                    filename='/temp/myapp.log',
                    filemode='w')

# ...and mirror INFO and above to the console.
console = logging.StreamHandler()
console.setLevel(logging.INFO)
formatter = logging.Formatter('%(name)-12s: %(levelname)-8s %(message)s')
console.setFormatter(formatter)
# add the handler to the root logger
logging.getLogger('').addHandler(console)
  • Log an exception traceback (useful)
try:
    open('/path/to/does/not/exist', 'rb')
except (SystemExit, KeyboardInterrupt):
    raise
except Exception:
    logger.error('Failed to open file', exc_info=True)
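Inside an except block, logger.exception('Failed to open file') is equivalent to logger.error('Failed to open file', exc_info=True): both attach the current traceback to the record.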
  • INI format example

This example uses a third-party handler, ConcurrentRotatingFileHandler, which makes rotation safe when several processes write to the same file.

[loggers]
keys=root
[handlers]
keys=stream, rotatingFile, errorFile
[formatters]
keys=form01
[logger_root]
level=DEBUG
handlers=stream, rotatingFile, errorFile
[handler_stream]
class=StreamHandler
level=NOTSET
formatter=form01
args=(sys.stdout,)
[handler_errorFile]
class=FileHandler
level=ERROR
formatter=form01
# errors get their own file so rotation of portal.log does not clobber it
args=('./logs/portal-error.log', 'a')
[handler_rotatingFile]
level=INFO
formatter=form01
class=handlers.ConcurrentRotatingFileHandler
args=('./logs/portal.log', 'a', 50240000, 10)
[formatter_form01]
format=%(asctime)s %(name)s %(levelname)s %(message)s
datefmt=
class=logging.Formatter

Loading the configuration

import os
import logging
import logging.config
import cloghandler  # registers ConcurrentRotatingFileHandler under logging.handlers

BASE_DIR = os.path.dirname(os.path.abspath(__file__))  # project root (adjust as needed)
logging.config.fileConfig(os.path.join(BASE_DIR, "conf/log.conf"))
logger = logging.getLogger(__name__)

The root logger is used by default; when a name matches a configured logger, that logger is used instead. A logger can also have more than one handler, for example to handle different log levels.
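A small illustration of the fallback (the logger name myapp.db is hypothetical):

import logging

logging.basicConfig(level=logging.INFO)  # configures the root logger only

# No logger named "myapp.db" has been configured, so its records
# propagate up to the root logger and are handled there.
logging.getLogger("myapp.db").info("handled by the root logger")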

  • JSON format example

configuration

{
    "version": 1,
    "disable_existing_loggers": false,
    "formatters": {
        "simple": {
            "format": "%(asctime)s - %(name)s - %(levelname)s - %(message)s"
        }
    },
    "handlers": {
        "console": {
            "class": "logging.StreamHandler",
            "level": "DEBUG",
            "formatter": "simple",
            "stream": "ext://sys.stdout"
        },
        "info_file_handler": {
            "class": "logging.handlers.RotatingFileHandler",
            "level": "INFO",
            "formatter": "simple",
            "filename": "info.log",
            "maxBytes": 10485760,
            "backupCount": 20,
            "encoding": "utf8"
        },
        "error_file_handler": {
            "class": "logging.handlers.RotatingFileHandler",
            "level": "ERROR",
            "formatter": "simple",
            "filename": "errors.log",
            "maxBytes": 10485760,
            "backupCount": 20,
            "encoding": "utf8"
        }
    },
    "loggers": {
        "my_module": {
            "level": "ERROR",
            "handlers": ["console"],
            "propagate": "no"
        }
    },
    "root": {
        "level": "INFO",
        "handlers": ["console", "info_file_handler", "error_file_handler"]
  Copy the code

Loading the configuration

import os
import json
import logging.config

def setup_logging(
    default_path='logging.json',
    default_level=logging.INFO,
    env_key='LOG_CFG'
):
    """Setup logging configuration"""
    path = default_path
    value = os.getenv(env_key, None)
    if value:
        path = value
    if os.path.exists(path):
        with open(path, 'rt') as f:
            config = json.load(f)
        logging.config.dictConfig(config)
    else:
        logging.basicConfig(level=default_level)
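Typical use at program start, continuing the module above (the logging.json path is relative to the working directory):

setup_logging()
logger = logging.getLogger(__name__)
logger.info("application started")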

  • A tool to format logs into JSON

python-json-logger does the formatting; logmatic adds some packaging on top of python-json-logger.

import logging.handlers
from pythonjsonlogger import jsonlogger
import datetime
class JsonFormatter(jsonlogger.JsonFormatter, object):
    def __init__(self,
                 fmt="%(asctime) %(name) %(processName) %(filename)  %(funcName) %(levelname) %(lineno) %(module) %(threadName) %(message)",
                 datefmt="%Y-%m-%dT%H:%M:%SZ%z",
                 style='%',
                 extra={}, *args, **kwargs):
        self._extra = extra
        jsonlogger.JsonFormatter.__init__(self, fmt=fmt, datefmt=datefmt, *args, **kwargs)
    def process_log_record(self, log_record):
        # Enforce the presence of a timestamp
        if "asctime" in log_record:
            log_record["timestamp"] = log_record["asctime"]
        else:
            log_record["timestamp"] = datetime.datetime.utcnow().strftime("%Y-%m-%dT%H:%M:%S.%fZ%z")
        if self._extra is not None:
            for key, value in self._extra.items():
                log_record[key] = value
        return super(JsonFormatter, self).process_log_record(log_record)
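Hypothetical wiring, to show the formatter in use:

import logging

handler = logging.StreamHandler()
handler.setFormatter(JsonFormatter(extra={"app": "myapp"}))  # "app" is an example field
logger = logging.getLogger("json-demo")
logger.addHandler(handler)
logger.setLevel(logging.INFO)
logger.info("hello")  # emitted as one JSON object per record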
