Abstract
I propose adding a lightweight module to the Python Standard Library to serve as a runtime diagnostic tool. It would verify that the current Python environment is correctly configured, ensure C-extensions are properly linked, and confirm that the underlying OS provides necessary resources (such as entropy, filesystem permissions, network sockets, etc.).
The Problem
Currently, Python lacks a built-in mechanism for such checks. While Lib/test (the regression suite) exists, it is designed for language developers to verify correctness, not for end users to verify environment integrity.
Users facing "broken" environments (e.g., missing OpenSSL headers, corrupted paths, permission issues in Docker/Linux) currently have to debug ad hoc. This proposal introduces a standardized "sanity check" similar to rustup check, brew doctor, or Neovim's :checkhealth.
The Proposal
The module would expose a simple CLI entry point:
python -m do_check
It would perform non-destructive verification of:
Critical C-extensions: Ensuring math, json, ctypes are importable and functional.
Permissions: Checking write permissions in CWD and Temp.
System Resources: Verifying availability of system entropy (random).
External Libraries: Verifying ssl context creation and sqlite3 availability.
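To make the intent concrete, here is a minimal sketch of what such non-destructive checks might look like. The function names are placeholders, not a committed API:

```python
# Hypothetical sketch of the proposed checks; all names are placeholders.
import importlib
import os
import tempfile


def check_imports(names=("math", "json", "ctypes", "ssl", "sqlite3")):
    """Report which critical modules import successfully."""
    results = {}
    for name in names:
        try:
            importlib.import_module(name)
            results[name] = True
        except ImportError:
            results[name] = False
    return results


def check_write_permissions():
    """Verify that files can be created in CWD and the temp directory."""
    return {
        "cwd": os.access(os.getcwd(), os.W_OK),
        "tmp": os.access(tempfile.gettempdir(), os.W_OK),
    }


def check_entropy():
    """Confirm the OS can supply random bytes."""
    try:
        os.urandom(16)
        return True
    except NotImplementedError:
        return False
```

Each check only reads state (imports, os.access, a 16-byte entropy read), so running it repeatedly has no side effects.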
If you have any questions regarding the idea, feel free to comment!
Bear in mind that most extension modules and the external libraries they depend on are optional and that limiting permissions is a normal, healthy security precaution.
At best, you'd need a configurable health check that can be told either what kind of deployment Python is running under or what checks to ignore.
That is a really good point regarding hardened environments and optional dependencies. I definitely agree that we shouldn't flag a read-only container as "broken" when that is the intended security posture.
Perhaps the solution is to treat the module as a Capability Reporter rather than a strict Pass/Fail test?
Instead of exiting with an error, it could output a structured summary of what is and isn't available. Or, as you suggested, we could support CLI flags/profiles to target the check:
python -m healthcheck --strict (for development machines)
This way, it becomes a tool for auditing the environment rather than just validating it.
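A rough sketch of that "Capability Reporter" idea, where the process only fails in a hypothetical --strict mode (function names are illustrative, not a proposed API):

```python
# Sketch of capability reporting instead of a hard pass/fail.
import importlib


def capability_report(modules=("ssl", "sqlite3", "ctypes")):
    """Return a structured summary of what is and isn't available."""
    report = {"capabilities": {}}
    for name in modules:
        try:
            importlib.import_module(name)
            report["capabilities"][name] = "available"
        except ImportError:
            report["capabilities"][name] = "missing"
    return report


def exit_code(report, strict=False):
    """Only fail in strict mode (e.g. on development machines)."""
    missing = [k for k, v in report["capabilities"].items() if v == "missing"]
    return 1 if (strict and missing) else 0
```

In the default mode a hardened container with ssl deliberately removed would still exit 0, while reporting that fact in the summary.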
Regarding the naming (do_check or healthcheck), I treated my suggestion simply as a placeholder. I am happy to defer the final naming decision to the Core Team/Community to ensure it fits Standard Library conventions.
I could certainly publish this as a PyPI module (and likely will as a prototype), but that runs into a critical "Bootstrap Paradox":
If a user's environment is broken (e.g. SSL is missing, or ctypes is failing), pip install often fails as well. So a user effectively cannot install the tool to diagnose why their installation is broken, because the installation process itself relies on a healthy environment.
I believe this functionality belongs in the Standard Library because it acts as a "first responder": it needs to be available before external packages can be successfully installed.
It is an interesting location for it, specifically for the pip dependency checks.
My main concern with ensurepip, though, is availability and scope.
On some popular distributions such as Debian or Ubuntu, ensurepip is often split out into a separate package, so if the user has a minimal/broken install, ensurepip itself might be missing, rendering the tool inaccessible.
The vision for the tool is to check things unrelated to pip/packaging, such as sqlite3 availability, math precision, or basic filesystem permissions for the application itself.
I feel like placing it in ensurepip might hide it from users who are debugging general runtime issues, not just installation issues.
It should be a PyPI module to start anyway so that you can benefit from a quick release cycle. IMO, wait until you have a stable version, then submit a PEP to have it included.
Not a bad idea IMO. You may also want to expose the health-check output (e.g., through a standard output format) so that programs that rely on Python (like NeoVim) can run the healthcheck and report the results to users.
That is very sound advice. I agree that iterating on PyPI is the best way to refine the logic and catch edge cases without being tied to the slower Standard Library release cycle.
I am still at the planning stage, deciding what should be in scope and what should not.
Regarding the output: The idea of a standard output format (likely JSON) for external tools like Neovim is excellent. I will definitely include a --json flag in the prototype to support that integration.
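A sketch of what that machine-readable output might look like; the field layout is an assumption for illustration, not a settled schema:

```python
# Hypothetical JSON report for external consumers (e.g. an editor's
# health-check panel); the schema shown here is illustrative only.
import json
import platform
import sys


def _has_module(name):
    try:
        __import__(name)
        return True
    except ImportError:
        return False


def machine_readable_report():
    """Build a stable, parseable summary of the environment."""
    return {
        "python": list(sys.version_info[:3]),
        "platform": platform.platform(),
        "checks": {
            "ssl": _has_module("ssl"),
            "sqlite3": _has_module("sqlite3"),
        },
    }


if __name__ == "__main__":
    # What `python -m healthcheck --json` might print.
    print(json.dumps(machine_readable_report(), indent=2))
```

An external tool like Neovim could then shell out to the interpreter, parse the JSON, and surface the failing checks in its own UI.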
My long-term goal remains the Standard Library (via a PEP, as you suggested) to address the "Bootstrap Paradox" where pip itself is broken, but I will focus on stabilizing the feature set externally first.
But those distros may also decide not to include the healthcheck module, for the same reasons they don't include ensurepip.
I would suggest you develop it as a single-file module that can be easily downloaded from a static URL (similar to the old get-pip.py). Running it would then be just two steps on the command line, on any platform with any kind of internet access.
That is a valid concern regarding distribution policies; however, I believe the distinction lies in why certain things are stripped. To my knowledge, distributions typically remove ensurepip because it bundles binary wheels and encourages bypassing the system package manager.
In contrast, my idea would be a lightweight, pure-Python diagnostic module with zero side effects. It doesn't install any packages or modify the system, so there is little incentive for maintainers to strip it out (similar to how they don't strip unittest or logging).
Regarding the single-file download suggestion: that is a good fallback, but it relies on networking and SSL being functional.
If, for example, a user is facing an SSL certificate issue (common in corporate/proxy environments) or is on an air-gapped machine, they cannot download the script. Having it in the standard library ensures the diagnostic tool is available even when the bridge to the outside world is down.
Personally, I'd rather see more things no longer be considered optional modules. It's not like people actually defensively import for core functionality expected to be present, and it's increasingly less reasonable over time that things like ssl support are absent on a system.
Perhaps that environment will deign to include your third-party script in their standard install?
I don't think it's worth debating stdlib inclusion at this point; it's too early. That debate can happen if/when there's a real tool to discuss.
That is a fair assessment. I agree that debating the distribution mechanics is premature without a mature reference implementation to point to.
I will follow the advice from you and Neil: I'll focus on building out the PyPI package first. I'll implement the JSON reporting and refine the "Capability Reporting" logic based on the feedback here. Once the tool has proven its utility in the wild, I will return with data to discuss standard library inclusion.
I completely agree. Ideally, the ecosystem would be strict enough that these checks wouldn't be necessary. But until we reach that state, I hope this tool can help users navigate the current reality of "optional" modules.
Thank you all for the initial feedback, it has been incredibly valuable!
I like this idea. I've been burned several times by using a self-compiled CPython that has invalid state. Optional modules aside, something like this would be very helpful for issue triage, because we could automatically deduce some information like "is this on a system that PEP 11 supports?" or "is this on a security-only Python version?"
If this were to be a PyPI package, I think this would be a lot less beneficial. You'd need pip to get the package, which requires a working interpreter!
Perhaps a useful feature would be to check the health of a different Python installation, so you could install from PyPI with a working version to check the health of a nonworking one.
This doesnât solve every use-case, but would help sometimes. I donât think thereâs harm in releasing on PyPI.
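That cross-installation check could be sketched roughly like this: a healthy interpreter runs a small probe script inside the target interpreter and parses its output. The function name and probe contents are hypothetical:

```python
# Sketch: diagnose a *different* Python installation from a working one.
import json
import subprocess

# Probe executed inside the target interpreter; it reports which
# extension modules import successfully there.
PROBE = """\
import json
mods = {}
for m in ("ssl", "sqlite3", "ctypes"):
    try:
        __import__(m)
        mods[m] = True
    except Exception:
        mods[m] = False
print(json.dumps(mods))
"""


def check_other_python(python_path):
    """Run the probe under another interpreter and parse its JSON report."""
    proc = subprocess.run(
        [python_path, "-c", PROBE],
        capture_output=True, text=True, timeout=30,
    )
    if proc.returncode != 0:
        # The target interpreter is too broken to run the probe at all.
        return {"error": proc.stderr.strip()}
    return json.loads(proc.stdout)
```

For example, check_other_python("/usr/bin/python3.9") could report that the distro interpreter is missing ssl even when your working interpreter has it.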
I'm not so sure that this should be a module at all. This makes much more sense as a CLI flag (e.g. python --doctor), because that won't require invocation of the eval loop (which might be broken) to work.
Thank you for validating the "Bootstrap Paradox": that is exactly my fear with the PyPI-only route.
Regarding the implementation (-m healthcheck module vs. --doctor flag):
You are absolutely right that a module requires the import system to be functional, whereas a C-level flag (--doctor) could theoretically run even earlier in the startup sequence.
I believe starting as a standard library module (written in Python) is the pragmatic "Step 1." It solves 95% of cases (missing stdlib modules, broken paths, SSL issues) where the interpreter basically starts but is functionally crippled.
Moving it to a C-level flag (--doctor) would be the ultimate robust solution, but perhaps that could be an optimization for later?
Also, I love the point about issue triage. If users could just attach the output of python -m healthcheck --json to a GitHub issue, it would save maintainers so much time asking "What OS are you on? Do you have OpenSSL linked?"
Another benefit of a PyPI module (and/or a downloadable script) is that it can be backwards compatible, which allows Python <= 3.14 to use it. A built-in version would be most useful, but will take the longest to become widespread.