No-network interrogation of whether dependencies are satisfied

How can I, without a network connection, interrogate Pip for whether installation dependencies are satisfied?

When I check manually, with pip list --no-index, Pip reports that a dependency is satisfied in the environment:

$ HTTPS_PROXY=localhost python3 -m pip list --no-index
Package    Version
---------- -------
pip        23.2.1
setuptools 68.1.2

But when I attempt with pip install --no-index, it insists the package is not installed:

$ HTTPS_PROXY=localhost python3 -m pip install --no-input --no-index .[test]
Processing /home/bignose/Projects/foo
  Installing build dependencies ... error
  error: subprocess-exited-with-error
  × pip subprocess to install build dependencies did not run successfully.
  │ exit code: 1
  ╰─> [2 lines of output]
      ERROR: Could not find a version that satisfies the requirement setuptools>=62.4.0 (from versions: none)
      ERROR: No matching distribution found for setuptools>=62.4.0
      [end of output]
  note: This error originates from a subprocess, and is likely not a problem with pip.
error: subprocess-exited-with-error

× pip subprocess to install build dependencies did not run successfully.
│ exit code: 1
╰─> See above for output.

note: This error originates from a subprocess, and is likely not a problem with pip.

So what I’m looking for is the same “figure out what dependencies are not yet installed” logic that, with no network requests, simply reports (via exit status, zero or non-zero) whether all dependencies are currently satisfied for a target (like “.[test]”, above).

This is in pursuit of a simple, shell-scriptable build process that continues only if the test dependencies are already installed (because the tests run in an environment where no network traffic is allowed).

$ python3 -m pip install --no-input --no-index .[test]
Processing /home/bignose/Projects/foo
[… reports that some packages are not installed, others are already satisfied …]

$ if [ $? -eq 0 ] ; then python3 -m unittest ; else echo "Test dependencies not installed; aborting." ; fi

If you have packaging installed, you should be able to code something up that collects everything you have installed, reads the list of their dependencies from their METADATA files, and then uses packaging.requirements to make sure every listed dependency is somehow satisfied.
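As a rough sketch of that approach (this only checks a single requirement string against the current environment, not a whole tree, and assumes the packaging library is importable):

```python
import importlib.metadata

from packaging.requirements import Requirement


def is_satisfied(req_string: str) -> bool:
    """Check one requirement string against the current environment."""
    req = Requirement(req_string)
    try:
        installed = importlib.metadata.version(req.name)
    except importlib.metadata.PackageNotFoundError:
        return False  # the distribution is not installed at all
    # contains() accepts a version string; allow prereleases, as pip does
    return req.specifier.contains(installed, prereleases=True)
```

So `is_satisfied("setuptools>=62.4.0")` returns True or False depending on what is installed, without ever touching the network.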

As to whether pip itself can do this, I don’t know.

The reason I’m expecting pip install --no-index to provide this is that it clearly knows how to figure out, from the locally-installed packages and the declared dependencies, which packages are not yet installed; those are the ones it then attempts to fetch from some index.

All I need is for it (or some other Pip command) to do this, without then doing the “actually fetch the packages” step.

Two things that stand out to me:

The step that is failing is the installation of build dependencies rather than runtime dependencies. To me it appears that Pip is trying to build your package to install it, and to do that it looks for build-system.requires and is starting another pip subprocess to install those. My hunch is some option is not correctly being propagated to the subprocess, but it isn’t showing the exact command it’s trying to run. A pip dev or someone more familiar with the build-backend system may be able to correct me here if I’m totally off track. (paging @pf_moore ?)

pip accepts configuration for all its CLI options via environment variables and configuration files as well. I’m curious whether the behavior would change if you used an exported environment variable to specify --no-index.
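A sketch of what that would look like (pip maps each long option to a PIP_<UPPERCASE_OPTION> environment variable, and an exported variable, unlike a command-line flag, is inherited by any pip subprocess):

```python
import os
import subprocess
import sys

# PIP_NO_INDEX=1 is pip's environment-variable spelling of --no-index.
# Because it is in the environment, it is also inherited by the
# subprocess pip spawns to install build dependencies.
env = dict(os.environ, PIP_NO_INDEX="1")
cmd = [sys.executable, "-m", "pip", "install", "--no-input", ".[test]"]
# subprocess.run(cmd, env=env, check=True)  # run from the project directory
```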

Second thing:

Yes and no. :grimacing: In your situation, yes: since you have your dependencies pre-installed, it should work (and I don’t know why it’s not). But in the general case there is no way to determine a dependency tree for an arbitrary Python package solely from static metadata, so normally pip will download candidate packages and build them in order to get a package’s dependencies, recursively. So when operating fully offline, pip will tell you the first unsatisfied dependency it encounters, but cannot know the complete list of missing dependencies.

You may already know that, but I thought it might be worth mentioning for posterity because it was not obvious to me (not all package managers work the same way, who knew).

Note that my use case doesn’t need an entire dependency tree. Only a boolean answer to the question: “are all dependencies for this install target, already satisfied?”

If the question leads to “we need a not-installed package to know its further dependencies”, then the answer is evidently “no” (because a needed package is not installed), and the task is done (exit status non-zero, failure).
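That fail-fast boolean check can be sketched with importlib.metadata and packaging alone (an illustration, not pip’s actual resolver; extras handling here is simplified):

```python
import importlib.metadata

from packaging.requirements import Requirement


def deps_satisfied(name, extras=frozenset(), _seen=None):
    """Return True only if `name` is installed and, recursively, so is
    every requirement needed for the requested extras."""
    if _seen is None:
        _seen = set()
    key = (name.lower(), frozenset(extras))
    if key in _seen:  # guard against dependency cycles
        return True
    _seen.add(key)
    try:
        req_lines = importlib.metadata.requires(name) or []
    except importlib.metadata.PackageNotFoundError:
        return False  # a needed distribution is absent: the answer is "no"
    for line in req_lines:
        req = Requirement(line)
        if req.marker is not None:
            # Needed for a plain install, or for one of the requested extras.
            envs = [{"extra": ""}] + [{"extra": e} for e in extras]
            if not any(req.marker.evaluate(env) for env in envs):
                continue
        if not deps_satisfied(req.name, frozenset(req.extras), _seen):
            return False
    return True
```

Any absent distribution makes the whole check return False immediately; note this only checks installed metadata, so a version mismatch (rather than an absence) would need the specifier check from packaging added on top.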

In this case, it looks like pip check may do what you need?

(I may be misunderstanding that command though.)


I found an option for pip install that could be related to the install error you are getting: --no-build-isolation (see pip install - pip documentation v23.3.1).

Since build isolation is on by default, pip may be trying to build your package in an environment different from the one it will be installed into. This seems like a better fit explanation than my original hypothesis about option forwarding.

Sadly, no; I don’t know what it’s "check"ing, but it seems not to care that I’m specifying a particular install target.

With packages like ‘coverage’ and ‘testscenarios’ specified in the ‘test’ optional feature, I need to know whether those are installed when I specify .[test]. But ‘pip check’ blithely assures me everything is fine:

$ python3 -m pip list
Package    Version
---------- -------
pip        23.2.1
setuptools 68.1.2

$ python3 -m pip check --no-input .[test]
No broken requirements found.

when it should be complaining that ‘coverage’ and ‘testscenarios’ are missing, for the ‘test’ feature specified.

Yeah, pip check has this bug, but unfortunately pip doesn’t have a good way of knowing what extras were meant to be installed.

Here’s some code I wrote a long time back that does what pip check does, feel free to adapt it for your use case:

import importlib.metadata
import packaging.version
import packaging.requirements
import packaging.markers
import re

def canonical_name(name: str) -> str:
    return re.sub(r"[-_.]+", "-", name).lower()

def safe_req_parse(r: str) -> packaging.requirements.Requirement | None:
    try:
        return packaging.requirements.Requirement(r)
    except packaging.requirements.InvalidRequirement:
        return None

def simple_version_of_pip_check() -> None:
    versions = {}
    reqs = {}
    for dist in importlib.metadata.distributions():
        metadata = dist.metadata
        name = canonical_name(metadata["Name"])
        versions[name] = packaging.version.parse(metadata["Version"])
        reqs[name] = [req for r in (dist.requires or []) if (req := safe_req_parse(r)) is not None]

    # Like `pip check`, we don't handle extras very well.
    # This is because there's no way to tell if an extra was requested. If we wanted, we could
    # do slightly better than pip by finding all requirements that require an extra and using that
    # as a heuristic to tell if an extra was requested.

    def req_is_needed(req: packaging.requirements.Requirement) -> bool:
        if req.marker is None:
            return True
        try:
            return req.marker.evaluate()
        except packaging.markers.UndefinedEnvironmentName:
            # likely because the req has an extra
            return False

    sorted_versions = sorted(versions.items(), key=lambda x: x[0])

    for package, version in sorted_versions:
        for req in reqs[package]:
            req_name = canonical_name(req.name)

            if not req_is_needed(req):
                continue

            if req_name in versions:
                if not req.specifier.contains(versions[req_name], prereleases=True):
                    print(
                        f"{package} {version} requires {req}, "
                        f"but {versions[req_name]} is installed"
                    )
            else:
                print(f"{package} {version} is missing requirement {req}")
This is the key here. The failure is because a build dependency isn’t present, and that’s because when pip needs to build a project, it creates a new, empty environment and installs the build dependencies in that. What’s in your current environment isn’t relevant.

If you want to build in your current environment, you can pass --no-build-isolation to pip, but you are then responsible for ensuring all build dependencies are present. Which seems to be what you’re doing here, so that’s likely to be the correct solution for you.

The problem you’ll hit is that pip can’t tell in advance what projects will need to be built, as opposed to having a pre-built wheel available. And pip can’t tell what build dependencies a project needs - the basics are in pyproject.toml, under build-system.requires, but the build backend can add others (setuptools, for example, requires wheel). So you need to do that investigation yourself.

pip install --dry-run --report might help you here. But I haven’t used it for anything like this, and I don’t know if it reports what you need to know (because of --dry-run, it may well skip the build step).

I had been toying with that option, but didn’t really have any success. I think that a missing part was --no-build-isolation.

The --report output is not helpful to my case. But skipping the build step is also what I need, so this --dry-run helps too.

Okay, what I have now:

$ python3 -m pip install \
    --dry-run --no-input \
    --no-index --no-build-isolation \
    .[test]

This can fail in various ways, but none are related to network access (eliminating distracting false positives). It will fail with exit status non-zero if dependencies are not satisfied, which is what I want; and it will do nothing, exit status zero, if the dependencies are satisfied.

I think this is as close as I’m going to get, this will help me move forward and hopefully make isolated-build-environment users happy. Thank you all!