Learning the logging module: how to create a custom log level properly and how to create a child logger that inherits from its parent

I am learning how to use the logging module.

Here is the desired design:
Inside a main.py file a customized logger is created, for instance:

import logging
import logging.handlers
from mypath.myprogram import program_run

def logging_setup():
    level_name = 'QUERY'
    level_value = 21
    logging.addLevelName(level_name, level_value)

    def log_query(self, msg, *args, **kwargs):
        if self.isEnabledFor(level_value):
            self.log(level_value, msg, *args, **kwargs)
    setattr(logging, level_name, level_value)
    setattr(logging.getLoggerClass(), level_name.lower(), log_query)

    class CustomFormatter(logging.Formatter):
        grey = "\x1b[38;21m"
        yellow = "\x1b[33;21m"
        red = "\x1b[31;21m"
        blue = "\x1b[34;21m"
        bold_red = "\x1b[31;1m"
        reset = "\x1b[0m"
        format_info = "[%(levelname)s - %(asctime)s]: "
        format_message = "%(message)s"

        FORMATS = {
            logging.DEBUG: grey + format_info + reset + format_message,
            logging.INFO: grey + format_info + reset + format_message,
            logging.QUERY: blue + format_info + reset + format_message,
            logging.WARNING: yellow + format_info + reset + format_message,
            logging.ERROR: red + format_info + reset + format_message,
            logging.CRITICAL: bold_red + format_info + reset + format_message,
        }

        def format(self, record):
            log_fmt = self.FORMATS.get(record.levelno)
            formatter = logging.Formatter(log_fmt, datefmt='%Y-%m-%d %H:%M:%S')
            return formatter.format(record)

    # Create custom formatter
    custom_formatter = CustomFormatter()

    # Configure the logger named 'app'
    custom_logger = logging.getLogger('app')

    # set up the console handler
    console_handler = logging.StreamHandler()
    console_handler.setFormatter(custom_formatter)

    # add handlers and set the level
    custom_logger.addHandler(console_handler)
    custom_logger.setLevel(logging.DEBUG)

if __name__ == '__main__':
    # set up logging
    logging_setup()
    # get logger
    logger = logging.getLogger('app')
    # test logger
    logger.info('information message')
    logger.error('error message')
    logger.query('query message') # this is printed
    # test the logger inside another file
    program_run()

Then inside the other file mypath/myprogram.py:

import logging
def program_run():
    logger = logging.getLogger('app')
    logger.info('information in another file')
    logger.error('error in another file')

Two questions:

  1. Is the custom log level created properly? The documentation advises against creating custom log levels and (at least for me) does not explain what should be coded in order to add one properly. I put together this snippet, which works, by copying from the internet, but I don’t fully understand it: I am confused between module, class and object attributes. For instance:
    setattr(logging, level_name.lower(), log_query) did not let my logger use the log function.
    Instead, setattr(logging.getLoggerClass(), level_name.lower(), log_query), which adds it to the class, works. How should I think about this when coding it?
  2. The example works on the logger 'app'; how can I change it in order to use __name__ with hierarchical namespaces? My goal is to customize the root/parent logger so that whatever is derived from the main logger (e.g. the myprogram logger) is customized too.

The docs only say that

it is possibly a very bad idea to define custom levels if you are developing a library.

(Logging HOWTO — Python 3.12.0 documentation)

They don’t say that’s generally a bad idea. Sometimes you want/need a custom log level, for instance a TRACE level that gives finer grained debugging info. In your design I’m not sure what the purpose of the “QUERY” level is, but formally there is nothing wrong with it.

The docs also say that it is generally not necessary to create special log levels, since you can use the generic logger.log function instead. But I think that’s just a matter of convenience: if you have an app or a bunch of apps where you want your own kind of logger, it may be more ergonomic to write a little logger class, or a custom handler or formatter, with a built-in custom level.
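As a quick illustration of that generic alternative (the level value, format string and logger name here are just for demonstration):

```python
import logging

# Register a name for a custom numeric level; no custom method is defined
QUERY = 21
logging.addLevelName(QUERY, "QUERY")

logging.basicConfig(level=QUERY, format="%(levelname)s: %(message)s")
logger = logging.getLogger("demo")

# The generic log() call takes the numeric level explicitly
logger.log(QUERY, "a query-level message")   # prints "QUERY: a query-level message"
```

The trade-off is that every call site has to carry the level constant, which is what a custom `logger.query(...)` method would hide.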

You have

logging.addLevelName(level_name, level_value)

According to the docs/help that should be the other way around:

addLevelName(level, levelName)
Associate ‘levelName’ with ‘level’.

but it will not raise an error if you turn the arguments around (it will then just not work as expected when you later try to use the level name or the level).
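You can see this directly in the interpreter (the TRACE level here is just for demonstration):

```python
import logging

# Correct order: numeric level first, then the name
logging.addLevelName(21, "QUERY")
logger = logging.getLogger("demo")
logger.setLevel("QUERY")   # the registered name now works wherever a level is accepted
print(logger.level)        # 21

# Swapped arguments do not raise, but the mappings end up the wrong way round
logging.addLevelName("TRACE", 15)
try:
    logger.setLevel("TRACE")
except ValueError as err:
    print(err)             # Unknown level: 'TRACE'
```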

As to the setattr calls - setattr(logging.getLoggerClass(), "query", log_query) - consider this:

>>> def func(self, msg):
...     self.error(msg)
>>> setattr(logging.getLoggerClass(), "query", func)
>>> logger = logging.getLogger()
>>> logging.basicConfig()
>>> logger.query("hi")
ERROR:root:hi

So, the class object now has a new function (sorry for the hackiness here, but Python allows all this) and any instance of that class can now use that. (What if the logger class already had a function with that name - well, then you just obliterated that old function … So if you plan on making this code more generally useful, then it might be better to define a custom logger class with the log_query function.)

The other call, setattr(logging, level_name.lower(), log_query), injects the log_query function under its name (“query”) into the logging module itself. This means you’d now be able to do something like logging.query(logger, "message"), which is probably not what you wanted/expected.

Whenever I’m unclear about Python details, the first thing I do is try out tiny snippets of code in the interpreter and then inspect the objects using dir and type. After the manipulation above, for instance, you can see that the logger class has a “query” entry in its internal dictionary, and that its type is a function.
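For example, reusing the hypothetical `func` from the snippet above, the inspection looks like this:

```python
import logging

def func(self, msg):
    self.error(msg)

setattr(logging.getLoggerClass(), "query", func)

cls = logging.getLoggerClass()
print("query" in dir(cls))                  # True: the method now exists on the class
print(type(cls.__dict__["query"]))          # <class 'function'>
print(type(logging.getLogger("x").query))   # <class 'method'> once bound to an instance
```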


For your second question - I think if you just use

custom_logger = logging.getLogger()  # instead of getLogger('app')

in your logging_setup then you’d be setting up the root logger there, and that setup would then automatically also be inherited by other, named loggers.
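A minimal sketch of that root-logger approach (the format string is illustrative):

```python
import logging

def logging_setup():
    # Configure only the root logger: handlers and level live here
    root = logging.getLogger()
    handler = logging.StreamHandler()
    handler.setFormatter(logging.Formatter("[%(levelname)s - %(name)s]: %(message)s"))
    root.addHandler(handler)
    root.setLevel(logging.INFO)

logging_setup()

# A named logger in any module needs no setup of its own:
# its records propagate up to the root logger's handlers.
logger = logging.getLogger("mypath.myprogram")
logger.info("formatted by the root logger's handler")
```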


Thank you, your insights worked. At the moment I’m working on creating my own custom logger class, since I think that is the best way to support my own log levels.

I have a question, though it might concern how Python code should be designed at this point. Here it is: suppose I create my own logger class deriving from logging.Logger and add some class/instance variables, for instance a dict with the new log levels (I shall use only one instance; since it derives from logging.Logger, I expect it to behave like a singleton). How can I also share this information, i.e. the dict, with a custom formatter? That formatter would be added to the handlers of my custom class. Should I code the custom formatter class inside my custom logger class?

Hi @lmaurelli - I cannot really tell you what is “best” or what you “should” do. I do think this is an excellent question about code design - and it kind of answers itself - I mean, if you are able to put this question to yourself, you will be able to answer it :slight_smile:
There is a reason why the Formatter and Logger classes are separate in the logging module - but this is “just” one particular design. I rather like it, since it focuses on composing objects and classes rather than entangling them in complex class hierarchies, so it enables more modularity. If I were building a custom logger class, I would stick to a similar high-level design, so personally I would also make my custom logger and formatter separate classes. But this may lead to more coding and to questions like: should my custom formatter also be usable with a general logger?
So, the questions are: What kind of design do you want? And how much work do you want to put into it?


I’m replying now since I had some time to work on my custom logger class. Since I am also learning Python, I picked the composition-based design since it is, in my opinion, more modular and clean.

Do you mind discussing with me what would be the best approach/design to create the custom logger once it is defined?
I wanted to use setLoggerClass so that the logging module takes care of managing everything for me, but I saw that it does not support passing custom arguments, i.e. the extra here. How would I re-design my custom logger if some behaviour, like setting its level or verbosity, depends on the extra arguments given by the app?

# Manually create an instance of CustomLogger with extra
# NOTE: use the CustomLogger class directly instead of setting it with setLoggerClass
logger = CustomLogger("app", extra=extra)
logger.setLevel(logging.DEBUG)  # example: overwrite the initial level from extra

# Optionally, if you want this logger to be retrievable via logging.getLogger("app"),
# you can manually add it to the logging manager's dictionary
logging.Logger.manager.loggerDict["app"] = logger

logger = logging.getLogger("app")

Hi - First of all - I really like your approach. Personally, I think this is the best way to deal with these kinds of designs: build on top of what the standard library (or some other mature lib) already provides. (I also like composition over inheritance.) Even if this makes the initial learning threshold a bit higher, it will surely pay off later.

In this case, I think the question you have to ask yourself is: why did the standard lib not allow extra arguments in setLoggerClass? I don’t have a ready-made answer, and I also don’t think there are deep reasons for this - but one obvious reason may be to keep things as simple as possible. If you dig into the docs (or into the logging module source code) you might find more specific reasons.

So, then the question is: do you really need the extra? You may want extra flexibility when instantiating the class, but couldn’t that also be supported with a different implementation? For instance, why not just add a special method to your CustomLogger class to set those extra features? Similar to logger.setLevel, you could add your own method, say setSpecialFeature. If the log level depends on those special features, then you’d also have to make sure to provide reasonable defaults (in case somebody does not call that method).
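A sketch of that idea, assuming a hypothetical `setSpecialFeature` method and a simple `extra` dict (both names are illustrative, not part of the logging API):

```python
import logging

class CustomLogger(logging.Logger):
    def __init__(self, name: str, level: int = logging.NOTSET) -> None:
        super().__init__(name, level)
        self.extra = {}  # reasonable default, so calling the setter stays optional

    def setSpecialFeature(self, extra: dict) -> None:
        """Configure the extra features after instantiation."""
        self.extra = extra
        # optionally derive the log level from the extras
        if extra.get("debug"):
            self.setLevel(logging.DEBUG)

logging.setLoggerClass(CustomLogger)
logger = logging.getLogger("app2")          # fresh name: created with the custom class
logger.setSpecialFeature({"debug": True})
print(logger.level)                         # 10 (DEBUG)
```

With a sensible default in `__init__`, setLoggerClass needs no extra constructor arguments at all.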

A special consideration in this case is whether or not you want/need to change the log level too if someone sets the extra features (and if log level normally depends on those). It’s not really clear whether it is more consistent to do this or not.

Another thing to consider is that if you make the log level (in some cases? all cases?) depend on some other attribute, you’re introducing a bit of extra complexity and friction into the design. That could be fine, but for a user of the library it might be unexpected (the setter functions on the logger may now be calling other setters under the covers).


Thank you Hans for the reply. To date, I am not planning on using this logger in code shared as a library. I use it only in my application as a god-tier class with customized behaviour. My wish is to be able to set up the logger once, so that every time I need it I simply call logging.getLogger(name) or logging.getLogger("name.subname") to retrieve the instance of my logger (which should be a singleton, AFAIK, from reading the docs here and there).

Reading online, I ended up with the following design:

A custom logger class to add custom levels over the base logging.Logger class as follows:

import logging
import platform
import sys
from enum import Enum
from colorama import Fore, Style

class CustomLogger(logging.Logger):
    def __init__(
        self,
        name: str,
        level: int = logging.NOTSET,
    ) -> None:
        super().__init__(name, level)
        # Add custom log levels
        self._addCustomLevels()

    def _addCustomLevels(self) -> None:
        # Helper function to create a logging method for a specific log level
        def make_log_method(level: CustomLevels):
            def log_method(self, message, *args, **kwargs):
                # NOTE: optimize by checking level before logging
                # https://docs.python.org/3/howto/logging.html#optimization
                if self.isEnabledFor(level.value):
                    self._log(level.value, message, args, **kwargs)
            return log_method

        # Register custom log levels and methods
        for level in CustomLevels:
            logging.addLevelName(level.value, level.name)
            setattr(CustomLogger, level.name.lower(), make_log_method(level))
            # not recommended (?): add the custom level to the logging namespace as well
            # setattr(logging, level.name, level.value)

# base levels should be accessed within the logging namespace,
# e.g. with logging.INFO, logging.DEBUG, etc.
# custom levels should be accessed within the following Enum class,
# e.g. with CustomLevels.QUERY, CustomLevels.PROFILE, etc.
class CustomLevels(Enum):
    PROFILE = 19
    QUERY = 21
    EMAIL = 31

While the rest of the design is just a “customization” of the logger, which can be accomplished by adding the desired behaviour through formatters/handlers with custom logic as follows:

def setup_logging(name: str | None = None, extra: dict | None = None):
    # NOTE: this is important
    # set the custom class as the default class managed by logging
    logging.setLoggerClass(CustomLogger)

    # NOTE: now use the logging APIs to configure the logger instance

    # get the logger
    logger = logging.getLogger(name)
    # NOTE: this is needed because I don't know how to disable double records
    # from the root and my custom logger (I guess) or to make mine the root
    logger.propagate = False

    # set the logger level
    level = logging.DEBUG if debug_boolean else logging.INFO
    logger.setLevel(level)

    # Set up the console handler
    setConsoleHandler(logger, datefmt=mydatefmt, verbose=myverbose)

    # Set up SMTP handler for emails, if required
    setCustomSMTPHandler(logger, mysmtpinfo)

    # Set up file handler for logging to file, if required
    filename = setFileConfigs(mycustomargs)
    setFileHandler(logger, filename, verbose, datefmt)

    # Optimize logging if not verbose

    return logger

For instance, the custom console handler is setup as follows:

def setConsoleHandler(logger: CustomLogger, datefmt: str | None = None, verbose: bool = False) -> None:
    # Set the console handler with the custom formatter
    console_handler = logging.StreamHandler(sys.stdout)
    console_formatter = CustomColoredFormatter(datefmt, verbose)
    # NOTE: add the custom colored formatter only to the console handler
    console_handler.setFormatter(console_formatter)
    logger.addHandler(console_handler)
class CustomFormatter(logging.Formatter):
    def __init__(
            self,
            fmt: str | None = None,
            datefmt: str | None = None,
            verbose: bool = False,
        ) -> None:
        self.verbose = verbose

        # NOTE: colored console output support on Windows
        if platform.system() == "Windows":
            # https://github.com/tartley/colorama
            # On Windows, enable ANSI color codes
            from colorama import just_fix_windows_console
            just_fix_windows_console()
            # from colorama import init
            # init()

        # Setup the format info based on verbosity
        if self.verbose:
            # the "-8" syntax pads levelname with spaces up to a width of 8
            self.format_info = "[%(name)s - %(levelname)-8s%(asctime)s (P:%(process)d - T:%(thread)d)]: "
        else:
            self.format_info = "[%(levelname)-8s%(asctime)s]: "

        self.format_message = "%(message)s"
        fmt = self.format_info + self.format_message

        # Call the base class constructor with the format and date format
        # NOTE: use the default format style "%" of the constructor
        super().__init__(fmt=fmt, datefmt=datefmt)

class CustomColoredFormatter(CustomFormatter):
    # NOTE: the console formatter subclasses the file formatter and adds colored logs
    def __init__(
            self,
            datefmt: str | None = None,
            verbose: bool = False,
        ) -> None:
        self._datefmt = datefmt
        # NOTE: colored console output support on Windows
        if platform.system() == "Windows":
            # https://github.com/tartley/colorama
            # On Windows, enable ANSI color codes
            from colorama import just_fix_windows_console
            just_fix_windows_console()

        # Call the base class constructor with the format and date format
        # NOTE: use the default format style "%" of the constructor
        super().__init__(datefmt=datefmt, verbose=verbose)

    def _make_message(self, ansi_code: str) -> str:
        """Take the log format string and wrap its info part in color.

        Args:
            ansi_code (str): ANSI foreground color code

        Returns:
            str: the colored format string
        """
        return ansi_code + self.format_info + Style.RESET_ALL + self.format_message

    # mapping between log levels and ANSI color codes
    FORMATS = {
        logging.DEBUG: Fore.MAGENTA,
        CustomLevels.PROFILE.value: Fore.GREEN,
        logging.INFO: Fore.WHITE,
        CustomLevels.QUERY.value: Fore.CYAN,
        logging.WARNING: Fore.YELLOW,
        CustomLevels.EMAIL.value: Fore.YELLOW,
        logging.ERROR: Fore.RED,
        logging.CRITICAL: Fore.RED,
    }
    def format(self, record: logging.LogRecord) -> str:
        color = self.FORMATS.get(record.levelno, Style.RESET_ALL)
        log_fmt = self._make_message(color)
        formatter = logging.Formatter(log_fmt, self._datefmt)
        return formatter.format(record)

This worked nicely until I found out that, to date, I am not able to get a properly working logger when retrieving it in another file, for instance in other.py:

import logging
logger = logging.getLogger("app")

def other():
    logger.profile('This is a profile message from other.') # does not work, i.e. logger has no "profile" method

And the logger is setup in main.py as follows:

import logging
from mypath import setup_logging
import other

if __name__ == "__main__":
    logger = setup_logging(name="app", extra=myextra)
    logger.profile('This is a profile message.') # works

I guess the solution is trivial, but I am having a hard time understanding the missing piece. I suspect that somehow the configuration is not saved in the inner manager when setting my class as the default logger class.

Ok, I don’t directly know why you’d run into this issue. (I’ve worked with my own custom formatters, but never implemented a custom logger class.) To get to the root of it, it would probably be fastest to debug the most stripped-down version (with a CustomLogger class that, for instance, only sets the log level to DEBUG as a hard-coded default and uses no other overrides), and then use a few other stripped-down scripts to test/debug this.

I wonder if it is related to this comment in the docs:

This function is typically called before any loggers are instantiated by applications which need to use custom logger behavior.

So, if the other files are not called from your main.py, or if they were imported before main.py, then the custom logger might not yet have been set up?


I think you are right: after moving the getLogger call inside other(), the logger of the other module works as expected. I guess code written outside of functions, i.e. at the same level as the import statements, is run when the modules are imported, before the __main__ entrypoint where the setup is performed. In the debugger I saw the change from the base class to my custom class. Thank you indeed, I should take a basic Python programming course I guess :face_in_clouds:
