Shared Library and Main Script Logging to the Same Destinations

Edit 2.1: I have something of a solution in Edit 1 and Edit 2, but I’m not 100% sure it’s appropriate or efficient. I’d still appreciate feedback.

*****************

I have what I believe is referred to as a library - a folder of my own classes & functions. I drop the folder into my scripts and import what I need.

I want to formalize my logging via the built-in logging module. I’ve been using it, but my solution was cobbled together in my early days of understanding Python from whatever I could find online. The logger’s name is the calling script’s basename, passed as a parameter to things called from the library, typically when instantiating a class. I’ve also used os.path.splitext(os.path.basename(inspect.stack()[1].filename))[0], but I think that stops working consistently if one library file imports another, due to the changing depth of the stack?

This also seems inefficient and janky, so here I am asking.

I want to be able to:

  • Generate a separate text log file for each main script that uses the library
  • Generate a separate text log file for each day - I plan to use a TimedRotatingFileHandler with when='midnight'
  • Log from the library to the main/calling script’s log file and not a library-specific log file
  • Determine each calling script’s log directory, among other things, via an INI, JSON, or YAML file - the script will be packaged as an EXE, run on Windows via Task Scheduler, and will need to be configurable via this config file
  • Eventually I’d like to generate log data in GELF format for a graylog server, if we decide to move forward with using one
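For the daily-rotation bullet above, here is a minimal sketch of how a TimedRotatingFileHandler could slot into the same dictConfig layout; the filename and backupCount shown are placeholder values that would come from your INI/JSON/YAML config file:

```python
import logging
import logging.config

# Sketch only: same dictConfig shape as Edit 1, but with a
# TimedRotatingFileHandler that rolls over at midnight.
# "example.log" and backupCount=14 are placeholders you'd read
# from your config file.
config = {
    "version": 1,
    "disable_existing_loggers": False,
    "formatters": {
        "simple_format": {
            "format": "%(asctime)s | %(name)s | %(levelname)s | %(message)s",
            "datefmt": "%Y%m%d %H:%M:%S",
        }
    },
    "handlers": {
        "file_handler": {
            "class": "logging.handlers.TimedRotatingFileHandler",
            "filename": "example.log",
            "when": "midnight",
            "backupCount": 14,  # keep two weeks of daily files
            "formatter": "simple_format",
            "level": "DEBUG",
        }
    },
    "loggers": {
        "__main__": {
            "handlers": ["file_handler"],
            "level": "DEBUG",
            "propagate": False,
        }
    },
}

logging.config.dictConfig(config)
logging.getLogger("__main__").info("rotating handler configured")
```

The same pattern would extend to GELF later: a third-party GELF handler class can be named in the handlers section without touching the library code.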

I’ve seen over and over again in my searches the following comments:

  1. Do not use the root log, for security reasons
  2. You should really use logging.getLogger(name)

I found answers referencing the “Logging Cookbook: Using logging in multiple modules” which of course breaks #2 and names the loggers statically. That sounds like a bad idea, since:

  • I’d need to modify each logger for each file in the library every time it was used by a new main script
  • I’d need to maintain a separate copy of the library for every main script.

I don’t expect to share the library. Maybe eventually I would with one or two other people, so I didn’t try to package the library. I also have no experience packaging.

Any help or direction would be really appreciated. Examples are super helpful, as I am mostly self-taught and only occasionally dabble in Python. While I may be familiar with a concept, I might not recognize it or know its formal or “pythonic” name. I’ve done a lot of searching, but I always worry my word choice works against me, and some results read like jargon I can’t parse. I also don’t touch LLMs for a number of reasons.

*****************

Edit 1: I have found a possible solution after just rapid trial and error. I’d appreciate any thoughts on how good of a solution it is.

Using dictConfig and a YAML file, I am able to specify non-root logger names and assign them handlers, including file handlers that can all point to the same file.

/log_config.yaml:

version: 1
disable_existing_loggers: False
formatters:
  simple_format:
    format: '%(asctime)s | %(name)s | %(levelname)s | %(message)s'
    datefmt: '%Y%m%d %H:%M:%S'

# Handlers
handlers:
  console:
    class: logging.StreamHandler
    formatter: simple_format
    level: DEBUG
  file_handler:
    class: logging.FileHandler
    filename: 'custom/log/path.log'
    formatter: simple_format
    level: ERROR

# Loggers
loggers:
  __main__:
    handlers:
      - console
      - file_handler
    level: DEBUG
    propagate: False
  library:
    handlers:
      - console
      - file_handler
    level: DEBUG
    propagate: False

/main.py

import logging
import logging.config
import yaml
from library import my_math

with open("log_config.yaml", "r") as f:
    config = yaml.safe_load(f)

logging.config.dictConfig(config)

def main():
    logger = logging.getLogger(__name__)
    logger.info("Script Starting.")
    my_math.divide(10,5)
    my_math.divide(5,0)
    logger.info("Script Finished.")
    logger.error(f"Test Error from {__name__}")

if __name__ == "__main__":
    main()

/library/my_math.py

import logging

logger = logging.getLogger(__name__)

def divide(x: int, y: int) -> float:
    try:
        quotient = x / y
    except ZeroDivisionError:
        logger.exception("Divide by Zero Error.")
        quotient = 0.0
    else:
        logger.debug(f"{x} / {y} = {quotient}")
    return quotient

I get log output from both main and anything in /library:

20260316 16:53:15 | library.my_math | ERROR | Divide by Zero Error.
Traceback (most recent call last):
  File "C:\scripts\library_example\library\my_math.py", line 7, in divide
    quotient = x / y
               ~~^~~
ZeroDivisionError: division by zero
20260316 16:53:15 | __main__ | ERROR | Test Error from __main__

Any concerns with this solution?

*****************

Edit 2: I had found the YAML + dictConfig approach earlier, but it was presented as a way to control module logging and used the root logger to combine them - an incomplete solution, and I was trying to avoid the root logger. I didn’t realize I could also name and configure __main__ and thereby configure all of logging without relying on the root logger. I couldn’t find any examples covering a more complex folder hierarchy, so I rapidly tried different things in the YAML/dictionary to figure out how it worked, kept going past quitting time, and found my solution. Edit 1 was my rushed attempt to post it and not waste everyone’s time.

After some thinking and what I learned yesterday from experimenting, it would be easy enough to expand the YAML file and keep the logging config as one section of it. That way, I could set other script configurations in new sections and use the YAML file’s logging section to set the log path. For some reason that logic was eluding me yesterday, so I was trying to figure out how to change the log path outside of the YAML file.
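A sketch of that idea, assuming a combined YAML file where the section names (app, logging) are illustrative: only the logging subtree is handed to dictConfig, and the rest stays available as ordinary settings. This uses PyYAML, as in Edit 1.

```python
import logging
import logging.config
import yaml  # PyYAML, as in the Edit 1 example

# Hypothetical combined config file contents: an "app" section for
# general settings plus a "logging" section in dictConfig schema.
CONFIG_TEXT = """
app:
  input_dir: 'C:/data/in'
logging:
  version: 1
  disable_existing_loggers: False
  formatters:
    simple_format:
      format: '%(asctime)s | %(name)s | %(levelname)s | %(message)s'
  handlers:
    console:
      class: logging.StreamHandler
      formatter: simple_format
      level: DEBUG
  loggers:
    __main__:
      handlers: [console]
      level: DEBUG
      propagate: False
"""

config = yaml.safe_load(CONFIG_TEXT)
logging.config.dictConfig(config["logging"])  # logging subtree only
input_dir = config["app"]["input_dir"]        # other settings live alongside
```

Because the log path lives inside the logging section, a script (or a small helper) could also rewrite config["logging"]["handlers"]["file_handler"]["filename"] before calling dictConfig, if the path ever needs to be computed at runtime.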

I’d still like feedback on flaws of my solution from people more experienced than me.

That is exactly how to do it, yes!

In general, when using the logging module in a library, avoid configuring any log handlers in the library itself. Instead, always configure the handlers in the application or script that uses the library.

What I often see for public packages is documentation of the logger names they use, with suggestions for how to enable or disable specific outputs that might be useful. Some applications allow external log configuration using dictConfig too, following similar principles: the application ships a default set of log handlers but supports reading a custom configuration to change levels and outputs.

I’ve made slight changes to Edit 1 and added some thoughts in a second edit. I’m still hoping for some additional feedback to make sure my thinking isn’t flawed. Thanks!

I don’t see any flaws per se. Others may have additional feedback, but what you’ve shown in your updated “Edit 1” looks pretty much like things I’ve done before myself. For my own scripts I prefer to use some command line options to set the log level and output, so I create the handlers and formatters in code instead of using dictConfig, but what you’re doing isn’t unusual at all.

For example, here is bandersnatch’s (a package mirroring tool) option to set a fully custom logging config: https://bandersnatch.readthedocs.io/en/latest/mirror_configuration.html#log-config And the same for uvicorn (an asynchronous HTTP server): https://uvicorn.dev/settings/#logging

The only thing I’d change personally is to “hard code” the script name for the application logger instead of using __name__ (which is currently "__main__"), or perhaps derive a name from __file__. I find that more useful. Using __name__ in library modules is very tidy, since the module paths translate directly into nice hierarchical logger names, but for a script it is sometimes easier to just have one logger named “mytool” or whatever you like.
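For instance (the name mytool is just an example), the script could do:

```python
import logging

# A fixed, human-friendly name for the application logger, instead of
# logging.getLogger(__name__), which yields "__main__" when run as a
# script. The YAML "loggers" section would then use the key "mytool"
# in place of "__main__".
logger = logging.getLogger("mytool")
```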

The only other thing that stood out to me: it’s not bad practice to configure the root logger, only to use the root logger to write messages. If you send messages to the root logger, especially from a library, no one else can selectively configure or disable those messages without changing or disabling all propagated log messages. But in an application it can be very useful to configure the root logger. For example, if you have 3rd-party dependencies that use logging and you want to receive those messages, you can configure the root logger and let messages from all your dependencies propagate to the root, where they’ll all be written with a consistent format to the same handlers.
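A sketch of that last point, with a hypothetical dependency name: the root logger gets the handlers, messages are still written through named loggers, and anything that propagates ends up in one place.

```python
import logging
import logging.config

# Configure the root logger (rather than logging *to* it) so that
# messages from third-party libraries propagate up to shared handlers.
logging.config.dictConfig({
    "version": 1,
    "disable_existing_loggers": False,
    "handlers": {
        "console": {"class": "logging.StreamHandler", "level": "WARNING"}
    },
    "root": {"handlers": ["console"], "level": "WARNING"},
})

# "some_dependency" stands in for any library logger; with no config of
# its own, its messages propagate to the root and use its handlers.
third_party = logging.getLogger("some_dependency")
third_party.warning("this reaches the root handler")
```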