I’ve often wished I could easily turn debug console logging on and off for a Python application. A lot of the time I wind up diving into the code, finding a logger, and changing some of its configuration or adding a handler. Other times I wind up just calling logging.basicConfig in some random location to get logging started.
Anyway, my idea is to create two environment variables:
PYTHON_LOGGING_CONFIG_FILE_PATH: the path to a file containing the dictionary schema defined in logging.config (see “Logging configuration” in the Python 3.13 documentation). The format of the contents corresponds to the loader specified in PYTHON_LOGGING_CONFIG_FILE_LOADER; by default, the data would be expected to be a JSON file.
and
PYTHON_LOGGING_CONFIG_FILE_LOADER: a dotted module path to a callable that takes a single parameter (the text contents of the file from PYTHON_LOGGING_CONFIG_FILE_PATH) and returns a dict in that format. If this is not set, json.loads is used by default.
The second variable lets us support JSON, YAML, etc., since the user can specify how to go from the str of the file contents to the dict.
After the dict is obtained, it’s simply passed to a call to logging.config.dictConfig.
I don’t think something this general exists at the Python level, I’ve thought about adding something like this to my own PYTHONSTARTUP, but think it may be useful for others as well.
Hmm, environment variables are usually meant for values shared across apps, but logging configuration usually differs per app (the app name typically appears both in logger names and in the log file name). So as it stands, I don’t quite see how a shared environment variable can be very useful, unless you always want everything logged to the console only.
I guess this can maybe work if there’s some additional placeholder magic added to the configuration file that gets replaced with the app name (which is usually the name of the working directory where the interpreter runs).
Not really a fan of the idea, I don’t want json in environment variables, and I generally don’t want environment variables clobbering or conflicting with anything that is defined in code. This seems like something that you should add support for on a per-application basis (perhaps something like successive -v’s decreasing the threshold logging level…)
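The “successive -v’s” idea can be sketched with argparse’s count action; the level arithmetic below is one common convention, not a standard (each -v steps the threshold down one level from WARNING):

```python
import argparse
import logging

parser = argparse.ArgumentParser()
# action="count" turns -v, -vv, -vvv into 1, 2, 3
parser.add_argument("-v", "--verbose", action="count", default=0)
args = parser.parse_args(["-vv"])

# WARNING (30) by default; each -v subtracts 10, floored at DEBUG (10)
level = max(logging.WARNING - 10 * args.verbose, logging.DEBUG)
logging.basicConfig(level=level)
```

With -vv as above, the threshold lands on DEBUG; with no flags it stays at WARNING.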
Do I understand the use case here: you’re using a CLI tool written in Python, it’s not your tool so you can’t easily add a feature, but you can modify the code by monkey-patching it?
In general I think tools are responsible for their own logging configuration, but it should be something the user can modify if desired. I don’t like the idea of a universal “logging override” mechanism because that seems likely to have undesired consequences (e.g. I set it up to debug something and a system utility is inadvertently affected). If it’s not universal, I’m not sure how it helps your use case — I guess you’d want the tool to eventually adopt it.
It’s more than just a CLI case. I can modify the code, but I don’t really want to dive into third-party code to find their loggers, etc. In a perfect world, if they all had a -vvv, that would be fine, but it just isn’t universal.
Every python app (and I guess library) seems to have slightly different logging settings. In my eyes I sort of want to centralize on one main mechanism across all the python apps (and libs) on my boxes. Then I have one switch to set and then restart services to see logging appear via the one configuration file.
It sort of goes two ways, it can be used as defaults and also used as a quick override if desired for a specific run. I don’t know of a universal way to do either of these right now.
Would it work to customize the settings for third-party code when you configure your logger? E.g., I had this in the setup for a tool I wrote because I wanted some logging, but not all of this:
import logging

root_log = logging.getLogger()
# don't need debug output for these
logging.getLogger("asyncio").setLevel(logging.INFO)
logging.getLogger("urllib3").setLevel(logging.INFO)
logging.getLogger("gcsfs").setLevel(logging.INFO)
logging.getLogger("fsspec").setLevel(logging.INFO)
# google is extra noisy, set to warnings only
logging.getLogger("googleapiclient").setLevel(logging.WARNING)
# matplotlib has a lot of debug output we don't need
logging.getLogger("matplotlib").setLevel(logging.INFO)
# numba logging is off the charts
logging.getLogger("numba").setLevel(logging.WARNING)
Nowadays I’ve been storing logging config in a TOML file, and I think I could accomplish all of the above in there too.
edit: as the above suggests, part of the issue is that different libraries consider different things worthy of a DEBUG or INFO log.
I think I get that now, but I think I disagree that this would be good as a flag you can just set. Such a flag would be more convenient when you wanted it, but it’s a trap if you aren’t expecting it or forgot about setting it.
A recipe for setting this up in your own startup might be a useful piece of documentation (or published in a post somewhere).
One of the problems I see with this is that I don’t think it will have the desired effect.
If a library defaults its log level to a specific level, and you were to put this somewhere like sitecustomize, then the library’s default level is going to be set when the library is imported — likely after sitecustomize has already run. It’s not compatible with the existing pattern where a library pins its own logger levels at import time.
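A small demonstration of the ordering problem, with a hypothetical library name (“somelib”): whatever sitecustomize sets first is simply overwritten when the library module’s import-time code runs later.

```python
import logging

# 1. sitecustomize (or PYTHONSTARTUP) runs first and enables debug output:
logging.getLogger("somelib").setLevel(logging.DEBUG)

# 2. Later, a hypothetical third-party module is imported; its module-level
#    code pins its own logger level, clobbering the earlier setting:
logging.getLogger("somelib").setLevel(logging.WARNING)

# 3. The sitecustomize setting is gone:
print(logging.getLogger("somelib").level == logging.WARNING)
```

So any early, process-wide hook only sticks for libraries that never touch their own levels.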
This really needs to be handled at the application level, after appropriate imports have happened if an application wants to tweak log levels from the defaults libraries have set.
That’s a fair point, though I rarely see libraries do this type of thing (set handlers or levels). They often just use their loggers and expect the application to worry about where the logs may go.
I guess that’s the thing at the end of the day: I wish there was an easy way to just turn on verbose console (or file logging) with defaults deemed sane by my preferences. My preferences of course probably differ from others.
Though, like you said, even if this were a thing, it would be run before loggers are set up by an application that actually configures logging outputs/levels.
Dang.
Now of course there are sketchy ways to do this type of thing even in PYTHONSTARTUP (like mocking out the ability to add future handlers), but that just gets uglier and uglier. Pretty enough to play with; ugly enough not to try to get upstreamed or use in production.
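One version of that sketchy approach, purely as a sketch: wrap logging.Logger.addHandler so every handler any application adds later is forced down to DEBUG. This is exactly the kind of monkey-patching described above — fine for poking at a problem, not for production.

```python
import logging

_original_add_handler = logging.Logger.addHandler


def _forced_debug_add_handler(self, handler):
    # Intercept every future handler registration and drop its
    # threshold to DEBUG before delegating to the real addHandler.
    handler.setLevel(logging.DEBUG)
    _original_add_handler(self, handler)


# Patch the class so the hook applies to all loggers created anywhere
logging.Logger.addHandler = _forced_debug_add_handler
```

This survives the import-ordering problem (it catches handlers added after startup), but it silently changes behavior out from under every library in the process, which is precisely why it stays in the “play with” bucket.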
Thanks to all the folks for giving their thoughts. If I can come up with a clean way to make it all work, maybe I’ll ask again, but for now I think this idea can be put to bed.