Seeing this recently linked thread again, I remembered that I switched some of my packages to depend on a nonexistent unsupported-python package to indicate lack of support for particular Python versions. For example, to cover the most common case, where some future version of Python will break the package, e.g. because of a deprecation:
[project]
name = "test-unsupported-python"
version = "1.0.0"
dependencies = [
    # package does not work on Python 3.11 and beyond
    "unsupported-python>=3.11 ; python_version>='3.11'",
]
Since that dependency cannot be installed, installation on Python 3.11 fails with a more or less human-readable error message:
ERROR: Could not find a version that satisfies the requirement
unsupported-python>=3.11; python_version >= "3.11" (from test-unsupported-python)
Then I realised that someone could just register that package name and put something bad in its place. So I registered the package just in case.
However, it would be nice to have an official package like that, which is guaranteed (by people we all trust, e.g. the PyPA) to never install anything and to fail with a somewhat readable error message. I’m happy to give up the package name if people like the idea enough.
I think if we were to formalise anything, it would be a better way of warning that the package someone just installed is not supported for their version of Python yet.
I don’t like blocking or failing installs, because it prevents things like distros from testing against newer releases. Upstream developers have no obligation to support newer versions of Python until they’re ready, but I’d prefer to see it expressed through docs and (at most) runtime warnings, rather than blocks.
# in your __init__.py file
import sys
import warnings

if sys.version_info >= (3, 12):
    warnings.warn("Sorry, I haven't been tested on this version of Python yet. "
                  "You may need to switch to 3.11 or earlier instead.")
All you’re doing is achieving the same effect as upper-capping the Python version, while avoiding the standard mechanism for doing it. So I don’t see why the linked thread implies that your approach is any less inadvisable than using requires-python. If there is an explanation in that thread of why your approach is acceptable, please link directly to it, as it’s possible I may have missed it. But if, as I suspect, there isn’t, then I’m -1 on having a second way to do something that’s generally advised against.
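For context, here is a minimal sketch of that standard mechanism, assuming a setuptools-based setup.py (the pyproject.toml equivalent is the requires-python field); the package name is hypothetical:

# setup.py -- illustrative only: declare the supported Python range via
# python_requires, which becomes the Requires-Python metadata that installers check.
from setuptools import setup

setup(
    name="my-capped-package",       # hypothetical name, for illustration only
    version="1.0.0",
    python_requires=">=3.8,<3.11",  # installers refuse to pick this release on 3.11+
)

With such a cap in place, pip on Python 3.11 skips this release and falls back to older, uncapped releases, which is exactly the downgrade behaviour debated later in this thread.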
It appears that I had forgotten the entire point of this:
The unsupported-python package must still allow the resolver to find a package solution, and only break at installation time. That’s why my original setup.py was this:
import os
import platform

from setuptools import setup
from setuptools.command.install import install


class UnsupportedPython(RuntimeError):
    pass


class InstallCommand(install):
    def run(self):
        if os.getenv('ALLOW_UNSUPPORTED_PYTHON', None) is None:
            raise UnsupportedPython('One or more of your installed packages '
                                    'have indicated that they do not support '
                                    'your version of Python ('
                                    + platform.python_version() + ')')
        return super().run()


setup(cmdclass={'install': InstallCommand})
This is pretty much what @henryiii proposed in the linked thread, and what @steve.dower said above, in the form of a dependency. For local testing and installation, the package can still be installed by setting the ALLOW_UNSUPPORTED_PYTHON environment variable.
OK, so that won’t work if you build a wheel and then install from the wheel, as it’s relying on the setup.py install command, which is deprecated (setuptools no longer supports direct invocation of setup.py and pip is in the process of removing the setup.py install code path).
Have you checked that this does what you think it does? Upon a build failure, pip will start downgrading dependencies until it either finds a set it can install or exhausts its candidates trying. I’m not sure, but I think a failure when running setup.py install is treated equivalently to a wheel build failure. I thought what you were trying to achieve was stopping pip from downgrading packages in order to resolve dependencies that cannot be installed because of Python version requirements.
The point is that (except for deprecated legacy code that’s going away) pip won’t ever call setup.py install, so your code won’t get executed.
Have you tried your approach, building wheels for everything and then pip install-ing the wheels?
To be clear, I’m not trying to achieve anything here. I don’t have a need for this functionality or an interest in it. I’m simply trying to explain how pip works for you.
I have been using this technique (the correct one with exception in setup.py) to guard against too-new versions of Python causing too-old versions of numba to be installed, which was a common source of error reports. It seems to work as intended?
Indeed, this relies on it being an sdist. But ignore my package and code snippet: it would be nice to have this fail-safe mechanism implemented in some official capacity, so that unknowledgeable users like myself don’t have to figure out what to do.
It will stop working in the near future, when pip switches to installing all sdists by building a wheel from them in an isolated environment, and then installing that wheel. That’s the point I’m trying to get across to you. It will work for now (in some cases) but it will stop working when build-wheel-then-install becomes the default (and only) sdist install method.
You can see this now by running pip with the --use-pep517 flag (which enables the new behaviour).
What fail-safe mechanism precisely? Blocking use of a package in newer Python versions? That’s enforcing an upper limit on the Python version, which is the behaviour that we’re recommending against. So if that’s the behaviour you mean, it’s not going to become officially supported - quite the opposite.
It works because test-unsupported-python depends directly on unsupported-python. If you have a third package foo that depends on test-unsupported-python without a version specifier, pip will happily downgrade test-unsupported-python till it can install foo.
I see, thanks! Moving the exception to the build command in setup.py seems to be a simple workaround.
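For illustration, here is a minimal sketch of that workaround, assuming the check is moved into setuptools’ build_py command so that it also fires when pip builds a wheel from the sdist (same environment-variable escape hatch as in the setup.py shown earlier):

import os
import platform

from setuptools import setup
from setuptools.command.build_py import build_py


class UnsupportedPython(RuntimeError):
    pass


class BuildCommand(build_py):
    def run(self):
        # Runs during the wheel build (the PEP 517 path pip is moving to),
        # not only during the legacy setup.py install step.
        if os.getenv('ALLOW_UNSUPPORTED_PYTHON') is None:
            raise UnsupportedPython('One or more of your installed packages '
                                    'have indicated that they do not support '
                                    'your version of Python ('
                                    + platform.python_version() + ')')
        return super().run()


setup(cmdclass={'build_py': BuildCommand})

Whether failing at wheel-build time avoids the downgrade behaviour discussed above still depends on how pip treats the build failure during resolution, so this is a sketch rather than a verified fix.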
Yes, failing a known-to-be-broken solution of packages at installation time without causing any side effects, such as older versions of dependencies being installed.
So what is the recommended way to deal with the situation where a requirement sets requires-python<3.11 and you want to prevent that from installing an outdated version of said requirement under Python 3.11?
I created another package uses-test-unsupported-python that depends on an unpinned test-unsupported-python: