How can I, without a network connection, interrogate Pip for whether installation dependencies are satisfied?
When I check manually, with pip list --no-index, Pip reports that a dependency is satisfied in the environment:
$ HTTPS_PROXY=localhost python3 -m pip list --no-index
Package Version
---------- -------
pip 23.2.1
setuptools 68.1.2
But when I attempt with pip install --no-index, it insists the package is not installed:
$ HTTPS_PROXY=localhost python3 -m pip install --no-input --no-index .[test]
Processing /home/bignose/Projects/foo
Installing build dependencies ... error
error: subprocess-exited-with-error
× pip subprocess to install build dependencies did not run successfully.
│ exit code: 1
╰─> [2 lines of output]
ERROR: Could not find a version that satisfies the requirement setuptools>=62.4.0 (from versions: none)
ERROR: No matching distribution found for setuptools>=62.4.0
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
error: subprocess-exited-with-error
× pip subprocess to install build dependencies did not run successfully.
│ exit code: 1
╰─> See above for output.
note: This error originates from a subprocess, and is likely not a problem with pip.
So what I'm looking for is to have the same "figure out what dependencies are not yet installed" logic and, with no network requests, simply report (via exit status, zero or non-zero) whether all dependencies are currently satisfied for a target (like ".[test]" above).
This is in pursuit of having a simple shell-scriptable build process, to continue the build only if the test dependencies are already installed (because the tests are run in an environment with no network traffic allowed).
$ python3 -m pip install --no-input --no-index .[test]
Processing /home/bignose/Projects/foo
[… reports that some packages are not installed, others are already satisfied …]
$ if [ $? -eq 0 ] ; then python3 -m unittest ; else echo "Test dependencies not installed; aborting." ; fi
If you have packaging installed, you should be able to code something up that enumerates everything you have installed, collects each distribution's dependencies as listed in its METADATA file, and then uses packaging.requirements to check that every listed dependency is somehow satisfied.
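A rough, untested sketch of that idea (it skips anything guarded by an extra and does not guard against malformed metadata) might look like this:

import importlib.metadata

from packaging.markers import UndefinedEnvironmentName
from packaging.requirements import Requirement
from packaging.utils import canonicalize_name
from packaging.version import Version

# Map every installed distribution to its version.
installed = {
    canonicalize_name(dist.metadata["Name"]): Version(dist.version)
    for dist in importlib.metadata.distributions()
}

# Check each declared dependency of each installed distribution.
for dist in importlib.metadata.distributions():
    for raw in dist.requires or []:
        req = Requirement(raw)
        try:
            if req.marker is not None and not req.marker.evaluate():
                continue  # only applies on another platform / Python version
        except UndefinedEnvironmentName:
            continue  # the marker references an extra we cannot resolve here
        name = canonicalize_name(req.name)
        if name not in installed:
            print(f"{dist.metadata['Name']} is missing {req}")
        elif not req.specifier.contains(installed[name], prereleases=True):
            print(f"{dist.metadata['Name']} wants {req}, found {installed[name]}")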
As to whether pip itself can do this, I don't know.
The reason I'm expecting pip install --no-index to provide this is because it clearly does know how to figure out, from the locally-installed packages and the declared dependencies, what packages are not yet installed; those will be the ones it then attempts to fetch from some index.
All I need is for it (or some other Pip command) to do this, without then doing the "actually fetch the packages" step.
The step that is failing is the installation of build dependencies rather than runtime dependencies. To me it appears that Pip is trying to build your package in order to install it, and to do that it looks at build-system.requires and starts another pip subprocess to install those. My hunch is that some option is not being correctly propagated to the subprocess, but it isn't showing the exact command it's trying to run. A pip dev or someone more familiar with the build-backend system may be able to correct me here if I'm totally off track. (paging @pf_moore ?)
pip accepts configuration for all its CLI options via environment variables and configuration files as well. I'm curious whether the behavior would change if you used an exported environment variable to specify --no-index.
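For example (just a sketch of that experiment; I haven't verified it changes the outcome here), pip reads PIP_NO_INDEX from the environment, and exported environment variables are inherited by any pip subprocess that pip itself spawns:

import os
import subprocess
import sys

# Equivalent to passing --no-index and --no-input on the command line,
# but inherited by child processes as well.
env = dict(os.environ, PIP_NO_INDEX="1", PIP_NO_INPUT="1")
result = subprocess.run(
    [sys.executable, "-m", "pip", "install", ".[test]"],
    env=env,
)
print("exit status:", result.returncode)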
Second thing:
Yes and no. In your situation yes, since you have your dependencies pre-installed it should work (and I don't know why it's not). But in the general case there is no way to determine a dependency tree for an arbitrary Python package solely from static metadata, hence normally pip will download candidate packages and build them in order to get a package's dependencies, recursively. So when operating fully offline, pip will tell you the first unsatisfied dependency it encounters, but cannot know the complete list of dependencies that are missing.
You may already know that, but I thought it might be worth mentioning for posterity because it was not obvious to me (not all package managers work the same way, who knew).
Note that my use case doesn't need an entire dependency tree. Only a boolean answer to the question: "are all dependencies for this install target already satisfied?"
If the question leads to "we need a not-installed package to know its further dependencies", then the answer is evidently "no" (because a needed package is not installed), and the task is done (exit status non-zero, failure).
I found an option for pip install that could be related to the install error you are getting! --no-build-isolation: pip install - pip documentation v23.3.1
Since build isolation is on by default, pip may be trying to build your package in an environment different from the one it will be installed into. This seems like a better-fitting explanation than my original hypothesis about option forwarding.
Sadly no; I don't know what it's "check"ing, but it seems not to care that I'm specifying a particular install target.
With packages like "coverage" and "testscenarios" specified in the "test" optional feature, I need to know whether those are installed when I specify .[test]. But "pip check" blithely assures me everything is fine:
$ python3 -m pip list
Package Version
---------- -------
pip 23.2.1
setuptools 68.1.2
$ python3 -m pip check --no-input .[test]
No broken requirements found.
when it should be complaining that "coverage" and "testscenarios" are missing for the "test" feature specified.
Yeah, pip check has this bug, but unfortunately pip doesn't have a good way of knowing what extras were meant to be installed.
Here's some code I wrote a long time back that does what pip check does; feel free to adapt it for your use case:
import importlib.metadata
import packaging.version
import packaging.requirements
import packaging.markers
import re


def canonical_name(name: str) -> str:
    return re.sub(r"[-_.]+", "-", name).lower()


def safe_req_parse(r: str) -> packaging.requirements.Requirement | None:
    # https://github.com/pypa/packaging/issues/494
    try:
        return packaging.requirements.Requirement(r)
    except packaging.requirements.InvalidRequirement:
        return None


def simple_version_of_pip_check() -> None:
    versions = {}
    reqs = {}
    for dist in importlib.metadata.distributions():
        metadata = dist.metadata
        name = canonical_name(metadata["Name"])
        versions[name] = packaging.version.parse(metadata["Version"])
        reqs[name] = [req for r in (dist.requires or []) if (req := safe_req_parse(r)) is not None]

    # Like `pip check`, we don't handle extras very well https://github.com/pypa/pip/issues/4086
    # This is because there's no way to tell if an extra was requested. If we wanted, we could
    # do slightly better than pip by finding all requirements that require an extra and using that
    # as a heuristic to tell if an extra was requested.
    def req_is_needed(req: packaging.requirements.Requirement) -> bool:
        if req.marker is None:
            return True
        try:
            return req.marker.evaluate()
        except packaging.markers.UndefinedEnvironmentName:
            # likely because the req has an extra
            return False

    sorted_versions = sorted(versions.items(), key=lambda x: x[0])
    for package, version in sorted_versions:
        for req in reqs[package]:
            req_name = canonical_name(req.name)
            if not req_is_needed(req):
                continue
            if req_name in versions:
                if not req.specifier.contains(versions[req_name], prereleases=True):
                    print(
                        f"{package} {version} requires {req}, "
                        f"but {versions[req_name]} is installed"
                    )
                continue
            print(f"{package} {version} is missing requirement {req}")
This is the key here. The failure is because a build dependency isn't present, and that's because when pip needs to build a project, it creates a new, empty environment and installs the build dependencies in that. What's in your current environment isn't relevant.
If you want to build in your current environment, you can pass --no-build-isolation to pip, but you are then responsible for ensuring all build dependencies are present. Which seems to be what you're doing here, so that's likely to be the correct solution for you.
The problem you'll hit is that pip can't tell in advance which projects will need to be built, as opposed to having a pre-built wheel available. And pip can't tell what build dependencies a project needs: the basics are in pyproject.toml, under build-system.requires, but the build backend can add others (setuptools, for example, requires wheel). So you need to do that investigation yourself.
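As a starting point for that investigation, the statically declared part can be checked the same way as the runtime dependencies; anything the build backend adds dynamically will still be missed. A sketch, assuming a pyproject.toml in the current directory and Python 3.11+ for tomllib:

import tomllib
import importlib.metadata

from packaging.requirements import Requirement
from packaging.utils import canonicalize_name
from packaging.version import Version

with open("pyproject.toml", "rb") as f:
    build_requires = tomllib.load(f).get("build-system", {}).get("requires", [])

installed = {
    canonicalize_name(dist.metadata["Name"]): Version(dist.version)
    for dist in importlib.metadata.distributions()
}

# Report every statically declared build requirement that is not satisfied.
for raw in build_requires:
    req = Requirement(raw)
    name = canonicalize_name(req.name)
    if name not in installed or not req.specifier.contains(
        installed[name], prereleases=True
    ):
        print(f"build requirement not satisfied: {req}")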
pip install --dry-run --report might help you here. But I haven't used it for anything like this, and I don't know if it reports what you need to know (because of --dry-run, it may well skip the build step).
This can fail in various ways, but none are related to network access (eliminating distracting false positives). It will fail with exit status non-zero if dependencies are not satisfied, which is what I want; and it will do nothing, exit status zero, if the dependencies are satisfied.
I think this is as close as I'm going to get; this will help me move forward and hopefully make isolated-build-environment users happy. Thank you all!