Seeing this recently linked thread again, I remembered that I switched some of my packages to depend on a nonexistent `unsupported-python` package to indicate lack of support for particular Python versions. For example, in the most common case that some future version of Python will break the package, e.g. because of a deprecation:
```toml
name = "my-test-package"
version = "1.0.0"
dependencies = [
    # package does not work on Python 3.11 and beyond
    "unsupported-python>=3.11 ; python_version>='3.11'",
]
```
Since that dependency cannot be installed, installation on Python 3.11 fails with a more or less human-readable error message:
```
ERROR: Could not find a version that satisfies the requirement
unsupported-python>=3.11; python_version >= "3.11" (from test-unsupported-python)
```
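The gating logic of that environment marker can be sketched with a stdlib-only model (a simplification; pip evaluates real markers via the `packaging` library, and the helper name here is hypothetical):

```python
# Simplified model of the marker "python_version >= '3.11'" that gates
# the guard dependency on the nonexistent unsupported-python package.
import sys

def guard_dependency_applies(version_info=sys.version_info):
    """True when pip would try (and fail) to resolve the guard dependency."""
    return version_info[:2] >= (3, 11)

# On older interpreters the marker is false and the install proceeds;
# on 3.11+ pip attempts the nonexistent package and the install fails.
print(guard_dependency_applies((3, 10, 0)))  # False
print(guard_dependency_applies((3, 11, 0)))  # True
```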
Then I realised that someone could just register that package name and put something bad in its place. So I registered the package just in case.
However, it would be nice to have an official package like that, which is guaranteed (by people we all trust, e.g. PyPA) to never install anything, and fail with a somewhat-readable error message. I’m happy to give up the package name, if people liked it enough.
How is this different from setting `requires-python`?
I think if we were to formalise anything, it would be a better way of warning that the package someone just installed is not supported for their version of Python yet.
I don’t like blocking or failing installs, because it prevents things like distros from testing against newer releases. Upstream developers have no obligation to support newer versions of Python until they’re ready, but I’d prefer to see it expressed through docs and (at most) runtime warnings, rather than blocks.
Please see the linked thread.
That would be great! Still it would be nice to have a workaround until that time comes.
The time has come
```python
# in your __init__.py file
import sys
import warnings

if sys.version_info >= (3, 12):
    warnings.warn(
        "Sorry, I haven't been tested on this version of Python yet. "
        "You may need to switch to 3.11 or earlier instead."
    )
```
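One nice property of the warning approach is that it is easy to test without an actually-unsupported interpreter. A minimal sketch, with the version check factored into a hypothetical helper so it can be simulated:

```python
# Verify the import-time warning fires only for the targeted versions,
# by passing the version tuple in explicitly instead of reading sys.
import warnings

def warn_if_unsupported(version_info):
    if version_info >= (3, 12):
        warnings.warn(
            "Sorry, I haven't been tested on this version of Python yet. "
            "You may need to switch to 3.11 or earlier instead."
        )

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    warn_if_unsupported((3, 12, 0))  # should warn
    warn_if_unsupported((3, 11, 0))  # should stay silent

print(len(caught))  # 1
```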
All you’re doing is achieving the same effect as upper capping of the Python version, while avoiding the standard mechanism for doing it. So I don’t see why the linked thread implies that your approach is any less inadvisable than using `requires-python`. If there is an explanation of why your approach is acceptable in that thread, please link directly to it, as it’s possible I may have missed it. But if, as I suspect, there isn’t, then I’m -1 on having a second way to do something that’s generally advised against.
It appears that I had forgotten the entire point of this: the `unsupported-python` package must still allow a package solution to be found, and only break on installation. That’s why my original setup.py was this:
```python
import os
import platform

from setuptools import setup

class UnsupportedPython(Exception):
    pass

if os.getenv('ALLOW_UNSUPPORTED_PYTHON') is None:
    raise UnsupportedPython('One or more of your installed packages '
                            'have indicated that they do not support '
                            'your version of Python ('
                            + platform.python_version() + ')')

setup()
```
This is pretty much what @henryiii proposed in the linked thread, and what @steve.dower said above in the form of a dependency. For local testing and installation, the package could still be installed by setting the `ALLOW_UNSUPPORTED_PYTHON` environment variable.
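The escape hatch can be factored into a small, testable check (the helper name is hypothetical; the environment variable name is from the post):

```python
# Installation is blocked unless the opt-out variable is set. Passing the
# environment mapping in explicitly makes the check easy to exercise.
def install_blocked(environ):
    return environ.get('ALLOW_UNSUPPORTED_PYTHON') is None

print(install_blocked({}))                                 # True: guard fires
print(install_blocked({'ALLOW_UNSUPPORTED_PYTHON': '1'}))  # False: opted out
```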
OK, so that won’t work if you build a wheel and then install from the wheel, as it’s relying on the `setup.py install` command, which is deprecated (setuptools no longer supports direct invocation of `setup.py`, and pip is in the process of removing the `setup.py install` code path).
Have you checked that this does what you think it does? Upon a build failure, pip will start downgrading dependencies until it finds a version it can install, or it dies trying its best to do so. I’m not sure, but I think that a failure in running `setup.py install` is treated in a way equivalent to a wheel build failure. I thought that what you were trying to achieve was preventing pip from downgrading packages to resolve dependencies that cannot be installed because of `requires-python`?
The point is that (except for deprecated legacy code that’s going away) pip won’t ever call `setup.py install`, so your code won’t get executed.
Have you tried your approach, building wheels for everything and then `pip install`-ing the wheels?
To be clear, I’m not trying to achieve anything here. I don’t have a need for this functionality or an interest in it. I’m simply trying to explain how pip works for you.
I have been using this technique (the correct one, with the exception in setup.py) to guard against too-new versions of Python causing too-old versions of numba to be installed, which was a common source of error reports. It seems to work as intended?
Using Python 3.11, this should work:

```shell
pip install -i https://test.pypi.org/simple/ test-unsupported-python==1.4.0
```

Whereas this should fail without attempting to fall back to 1.4.0:

```shell
pip install -i https://test.pypi.org/simple/ test-unsupported-python==1.4.1
```
Indeed, this relies on it being an sdist. But ignore my package and code snippet: it would be nice to have this fail-safe mechanism implemented in some official capacity, so that unknowledgeable users like myself don’t have to figure out what to do.
It will stop working in the near future, when pip switches to installing all sdists by building a wheel from them in an isolated environment, and then installing that wheel. That’s the point I’m trying to get across to you. It will work for now (in some cases) but it will stop working when build-wheel-then-install becomes the default (and only) sdist install method.
You can see this now by running pip with the `--use-pep517` flag (which enables the new behaviour).
What fail-safe mechanism precisely? Blocking use of a package in newer Python versions? That’s enforcing an upper limit on the Python version, which is the behaviour that we’re recommending against. So if that’s the behaviour you mean, it’s not going to become officially supported - quite the opposite.
As of the latest release, pip no longer invokes `setup.py install`.
It works because `test-unsupported-python` depends directly on `unsupported-python`. If you have a third package `foo` that depends on `test-unsupported-python` without a version specifier, pip will happily downgrade `test-unsupported-python` until it finds a version it can install.
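The downgrade behaviour described here can be modelled with a toy resolver (hypothetical helper and data; real pip backtracking is far more involved):

```python
# Toy backtracking: try candidate versions newest-first and fall back until
# one installs. Here only 1.4.1 carries the unsupported-python guard.
def pick(candidates, install_fails):
    ordered = sorted(candidates,
                     key=lambda v: tuple(map(int, v.split('.'))),
                     reverse=True)
    for version in ordered:
        if version not in install_fails:
            return version
    raise RuntimeError("ResolutionImpossible")

# Unpinned, the resolver quietly downgrades past the failing 1.4.1...
print(pick(["1.4.0", "1.4.1"], install_fails={"1.4.1"}))  # 1.4.0
# ...whereas pinned to ==1.4.1 there is nothing to fall back to,
# and resolution fails outright.
```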
I see, thanks! Moving the exception to the `build` command in setup.py seems to be a simple workaround.
Yes, failing a known-to-be-broken solution of packages at installation time without causing any side effects, such as older versions of dependencies being installed.
So what is the recommended way to deal with the situation where a requirement sets `requires-python<3.11` and you want to prevent that from installing an outdated version of said requirement under Python 3.11?
I created another package `uses-test-unsupported-python` that depends on an unpinned `test-unsupported-python`:

```shell
pip install -i https://test.pypi.org/simple/ --extra-index-url https://pypi.org/simple/ uses-test-unsupported-python
```
It fails as expected on trying to install `test-unsupported-python` 1.4.1, without falling back to 1.4.0.
As per the linked article, and the article it in turn links to: don’t do that. That’s precisely why upper limits on versions are bad.
But this is about the case where it’s not me setting the upper limit, it’s some other dependency over which I have no control.
Then report it as a bug to them, and link them to the explanations why.
Or convincingly persuade the pip maintainers that ignoring upper bounds on `requires-python` is the way of the future.
One thing to add: the proposal would be to ignore the upper bound of `requires-python` when locking, and raise an error when installing.
We’ll need to adjust the text of PEP 345 and update the implementations of package managers.
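The lock-versus-install split proposed above could be sketched like this (function and mode names are hypothetical, purely to illustrate the two behaviours):

```python
# Treat an upper bound on requires-python as advisory when locking,
# but as a hard error when installing.
def check_requires_python(current, upper_bound, mode):
    """Return 'ok', 'warn' (lock) or 'error' (install) for an upper bound."""
    if current < upper_bound:
        return "ok"
    return "warn" if mode == "lock" else "error"

print(check_requires_python((3, 10), (3, 11), "lock"))     # ok
print(check_requires_python((3, 12), (3, 11), "lock"))     # warn
print(check_requires_python((3, 12), (3, 11), "install"))  # error
```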