Is this documented as best-practice somewhere?
Clicked too fast. I meant to add:
I think first there should be an agreed-upon best-practices solution for projects that, for whatever reason they decide, want to reject newer versions of Python. The desire is to eventually provide support, but on their timeline, and not on the timeline of CPython.
No, the whole point of a lock file is to allow an environment to be recreated exactly somewhere else. You don’t have control over your Python version. Maybe you made your lock file on Python 3.6.0, but your CI system has 3.6.1. Requires-Python allows patch versions - >=3.6.1 is very common, in fact! You must be able to restore your environment on different versions. Poetry (and PDM, etc.) take a range (unfortunately, they force this range to be the one you put in Requires-Python if you distribute on PyPI!), and they solve for that range. So if you write >=3.6, every package it finds must also support that entire range, 3.6 included. If a package does not monotonically increase its Python upper bound from version to version, this procedure will pick up old versions. If it can’t find an unbounded result, it will force you to add a bound in order to solve. It’s not “wrong”, but it’s very unhelpful: it forces you to set your metadata based on a lock file (which is unimportant for a library, and also for a PyPI-distributed application; it only matters for applications that are not distributed via PyPI, at least unless PEP 665 support were added to PyPI packaging), and it produces solves that users don’t expect if a bound is ever lowered.
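A quick way to see why patch versions matter here is a membership check with `SpecifierSet` from the `packaging` library (the version strings below are just illustrative):

```python
from packaging.specifiers import SpecifierSet

# A lock file made on Python 3.6.0 cannot be restored on an
# interpreter excluded by Requires-Python, and vice versa.
spec = SpecifierSet(">=3.6.1")

print("3.6.0" in spec)  # False - a 3.6.0 interpreter is excluded
print("3.6.1" in spec)  # True
print("3.9.9" in spec)  # True - the upper bound is open
```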
Mutable metadata, besides being a major undertaking, requires every package author to maintain every version they’ve ever released (or at least the metadata for it). It also will have issues with automated scripts - slapping an upper cap on all older copies of a package can cause problems (what if that dependency was dropped or changed? What if it really was supported? What if the problem is Python-version dependent? What about environment markers? Etc.) If a package doesn’t change the metadata, we are still in the same situation as today - so the options given would still be useful, IMO. In fact, for option 1, I’d add: “You are not allowed to cap the Python version in a wheel/SDist. However, it is allowed as a .patch file.” That would avoid normalizing version capping, and would clearly indicate that pre-guessing is not recommended practice.
Python never gets locked, so “1.2” is not a valid choice - it must work in the entire range. If you put python = ">=3.6,<3.9", then every dependency it discovers must be valid in that range. Or if you put python = ">=3.6.1", it must find a solve where every dependency satisfies that. And it’s perfectly happy to go back as far as it needs to in order to satisfy that. If you want to “lock” the Python version, you have to specify python = "==3.9.9", and that’s what goes into your Requires-Python slot. It really should have two settings, one for Requires-Python, and one for solving. In fact, I’d personally like to leave the solving one blank, then have it tell me what the final range is. This would also have the nice effect that it would never back-solve to get a “better” Python upper bound.
There is no agreed-upon best-practices solution for rejecting Windows. Why is rejecting Python versions special? Because the name of Requires-Python, the fact that it’s a free-form slot, and the fact that the wording in the PEP/standard is terrible all make it seem like it’s already providing this (it was not supposed to - it was added to fix solves, not break them). People don’t control their Python version, and Python is supposed to be forward compatible, so providing a really easy way to force forward incompatibility seems destructive for 90% of packages.
And it’s sometimes OS dependent! NumPy released Linux 3.10 wheels well before anything else. The reason given for why SciPy didn’t support 3.8 right away was due to the Windows change. Etc.
In fact, what the large data science packages actually want is the ability to force wheels without user intervention. If a package could tell Pip not to fall back on SDists unless the user asked for an SDist, that would solve the problem for most of the packages wanting this feature. In fact, PyTorch deleted all their old SDists (yanking was not enough) just to ensure pip never tried to fall back to an SDist if it couldn’t install a binary.
Large packages have hundreds of thousands of users, many of whom will open issues in droves if anything fails mysteriously, which is why they want these sorts of failures to be explicit and immediate.
If that’s true, then why not propose an addition to requirements metadata that says only binary distributions are valid? This is possible at the user level with pip (--only-binary numpy), so there’s some level of prior art here.
Of course, for such a proposal to be viable, you’d have to persuade the projects who currently feel that capping the Python version solves their problem that this would be a better solution for them. Otherwise we’ll just have two proposals, and still be arguing over whether capping is a good idea.
Okay, thanks for explaining this! It seems that the Python version restriction isn’t just saying “I support anything in this range”—it’s saying “I must support everything in this range”. This is fundamentally different than any of the other package requirements. This explains why Poetry cannot accept a dependency whose Python version restriction is tighter than your project’s version restriction.
Another option would be this:
Add another option alongside the Python version constraint called the Python version support requirement.
The version constraint X says: This project cannot install on any version that doesn’t match the constraint.
The version support requirement Y ⊆ X says: The intersection of all of this project’s dependencies’ support requirements must be a superset of Y. In other words, it’s a promise that this project will work on all of Y.
We currently only have a setting for Y. Projects like scipy would like to set X, but can’t, so they set Y as a bad proxy.
I’m not sure this would solve everything though since you still want mutable metadata to update Y in case something breaks in a new version or some dependency adds a restriction to their X.
Oh! Looks like you have the exact same idea, but your leaving blank idea is even better because then the need for mutating metadata I mentioned above is greatly reduced. If some dependency adds a restriction on X, your Y would be automatically updated.
So, in short, if no one is specifying Y, then why can’t we do this:
- Change the meaning of the Python version requirement to X,
- Change package tools to treat it as such, which means not ensuring that all projects support everything in the Python version requirement, but rather calculating Y as the intersection of every dependency’s X, and succeeding if your virtual environment is in that range.
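A rough sketch of computing Y as the intersection of every dependency’s X, using `SpecifierSet` from the `packaging` library (the dependency names and ranges below are made up; note that `&` only concatenates the clauses, it does not simplify them into a reduced range):

```python
from packaging.specifiers import SpecifierSet

# Hypothetical Requires-Python (X) values for each dependency
dependency_x = {
    "numpy": SpecifierSet(">=3.8"),
    "scipy": SpecifierSet(">=3.8,<3.12"),
    "other": SpecifierSet(">=3.7"),
}

# Y = intersection of every dependency's X; '&' ANDs clauses together
y = SpecifierSet("")
for spec in dependency_x.values():
    y &= spec

# Succeed if the virtual environment's Python falls inside Y
print("3.10.4" in y)  # True
print("3.12.0" in y)  # False - scipy's cap excludes it
```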
I can imagine how that’s a major undertaking on the PyPI side, but with a few scripts to cover the most common cases, it shouldn’t be such a chore for package authors. The usual case is that a release of Python (say 3.10) breaks you, so you run some script that “caps all existing releases at <3.10”. Then your next release is uncapped.
Mutable metadata really is the ideal solution, notwithstanding the work it might take to do it.
Yes, I submitted an issue about this: Poetry refuses to upgrade projects that bound the Python version · Issue #4292 · python-poetry/poetry · GitHub
It looks to me like there’s both a pragmatic “what do we do given the current state of tools” argument (which is the one @henryiii is making), and a more principled “what’s the design we’d ideally like to have” argument. A few thoughts:
- The current state of things clearly doesn’t work well. Editing the
requires-python definition to forbid upper bounds is one way out. And that way is the least amount of work (this is important).
- On the other hand, from a design perspective removing accurate metadata in order to let an installer tool download an sdist which that metadata indicated was not going to work and then have the build system error out (potentially after setting up an isolated build env which may download a lot more data first - for scientific projects this could be hundreds of MBs, and if you rely on the likes of TensorFlow or PyTorch potentially >1 GB), is a poor design choice long-term.
- Python is just one of many dependencies. It’s treated differently on PyPI than other build-time or runtime requirements, however (a) there are tools that can install/control Python versions, and (b) sdists are not only for PyPI - they are also for conda-forge, Homebrew, Linux distros, etc. all of which treat Python the same as other dependencies.
- Matthias’ proposal for editable metadata is probably the best long-term design (and goes together with @henryiii’s idea 2, “implement upper capping properly”). It’s way too much work to consider only for this
python_requires issue; however, it’s very valuable for adding caps to other dependencies long-term. For conda-forge this is possible, and many maintainers describe that capability as a life-saver.
- The current description of
requires-python is not “terribly worded”; it’s just the intuitive way of describing a dependency requirement, and it matches what I (and I assume many other maintainers) would assume if I wasn’t familiar with this discussion - supporting the PEP 508 specification language. We recently had a discussion on the Pip issue tracker about why build and runtime dependencies are treated so differently (the former cannot be overridden), and the conclusion there was also that there’s no good reason for that. The design reasoning is similar here; Python is not special enough that upper caps must be forbidden.
On the need for this:
- On the list of real-world issues that we (scientific package maintainers) have with packaging, this doesn’t rank very high. If the outcome is that we go with erroring out in the build system, we can live with this for some years to come.
- Our concerns are real though. We’ve always had this metadata info in all release notes “this release supports Python 3.8-3.10”, and users(/packagers) do not read release notes. Improving metadata quality and not downloading a lot of data before erroring out does matter.
- As already pointed out by a few others, the “you don’t know if it will or won’t work, hence you should not cap” is extremely misguided as the blanket response to caps. Package authors should default to no caps in the vast majority of cases, but there are valid reasons to add caps on any dependency (as also laid out in @henryiii’s excellent recent blog post). For packages like NumPy and SciPy we are sure things will break with future Python versions, so a cap is valid. Note that we do think about this carefully - for example for NumPy 1.21.2, released before Python 3.10rc1, we already set the cap to
<3.11 because we planned to upload wheels later on, after Python became ABI-stable and we had our wheel build infra updated.
- It’s also worth pointing out that this is not just about NumPy and SciPy. The way sdists are treated in general by install tools isn’t great, which is causing other projects to not upload sdists at all. For example, take what are probably the three largest and most actively developed Python projects (several dozens to several hundreds of full-time engineers): TensorFlow, PyTorch and RAPIDS. The latter has given up on PyPI completely, and the former two do not upload sdists, because they are too problematic (failed installs highly likely) - which is a shame, because sdists have significant value for archival and code-flow-to-packagers reasons. This
requires-python issue is not a main driver for not having those sdists, but it does show how problematic it is to try installing sdists that aren’t going to work.
On locking install tools:
- Poetry and PDM clearly have usability issues here.
- The Poetry/PDM behavior, and the resulting flow of packages to PyPI with unnecessary caps seems to drive most of the opposition to adding any caps at all. This is understandable, but it’d be much better to push those tools to stop doing that rather than to continue pushing back on all caps.
This isn’t true. The transition mechanism I had in mind for SciPy is to upload a new sdist for the last release which was missing the upper cap in
requires-python, and have it error out in setup.py with a clear error message. That’d be equivalent to what @henryiii is advocating for (modulo it doesn’t solve the immediate issue with locking solvers), and it then becomes irrelevant once the final design is implemented in install tools.
@henryiii is correct that this is a much more important wish/problem. It’s a little orthogonal though, as I hope my first points on pragmatism vs. good long-term design made clear.
@pf_moore that’d be great, and is Speculative: --only-binary by default? · Issue #9140 · pypa/pip · GitHub (your original proposal). It’s a significant amount of work, and it’s still not clear to me that it has enough buy-in from install tool maintainers (?). I already replied on the issue after you asked about potential funding: “If it looks like there will be buy-in for this idea from the relevant maintainers/parties, I’d be happy to lead the obtain-funding part.” Still happy to do that, and confident I can actually obtain that funding in 2022. I’m not quite prepared to do the significant amount of work of getting all the buy-in we need before arranging funding though, or to arrange funding and then not get it done because of lack of consensus. So this is a bit of a chicken-and-egg problem. Someone within the PyPA who understands what’s needed and has connections to the relevant parties would be better placed to do this initial alignment (if a smaller amount of funded time would help there, please let me know - that’s easier to arrange).
Second thought on this: it must be the default. Any user-level opt-in switch (e.g. writing in your docs
pip install --only-binary scipy) is useless, because users don’t read docs - and when you have O(20 million) users, that’ll be a lot of bug reports and wasted time.
Third thought on this and on capping in general: about half of all Python users are scientific / data science users now. These users are not developers - they are scientists and engineers first, and programming is a tool to do their actual job. Expecting them to figure out how to fix up their install commands after a new release of some dependency has broken their
pip install some_pkg is a poor idea. The prevalent attitude to caps around here is “don’t add them; when something breaks, just fix it”. That just plain doesn’t work for these users. And unfortunately these users do sometimes(/regularly) work in places with outdated (or non-Linux) HPC systems, and may therefore need to build from an sdist. Building from sdists thus needs to be reliable - I wish we could rely on “only build from source if you’re an expert”, but we can’t.
Thanks for the extremely well thought out response. I don’t have time to make a detailed response here (and honestly, I’m not really the person who should) but I’d like to make one point here from the perspective of a pip maintainer which might be getting overlooked.
Sigh. I just can’t write a “quick reply”, no matter how hard I try
tl;dr: Any proposals here need to be very clear whether they are looking at the “theoretical” problems with caps, or at “how pip can hack around the issue” (with a side question of “what about other installers that may exist, now or in the future”).
Source distribution handling in pip is a huge mess of heuristics, backward compatibility hacks, and out-and-out guesses as to what the right behaviour should be.
Any comments on this thread about what “installers should do” fills me with dread, because implementing it in pip will no doubt trigger an extended and draining debate on edge cases and failure modes.
The issue here is fundamentally that dependency solvers are complex, and the key algorithms are based on principles that don’t apply in Python packaging (namely, that the problem can be statically defined in advance). In developing the new resolver, the pip developers have had to make compromises and design decisions to adapt existing approaches to the realities of Python packages. (And Poetry and conda have made their own, different, compromises, which is why we have to be careful not to agree on a solution that only works for pip.) Wheels are fairly simple to handle, because the only issue we have to address is that metadata isn’t available “up front” but must be retrieved on demand. Source distributions, however, are a nightmare, because builds might fail, builds have their own dependencies, building an sdist or even just getting the metadata can be hugely expensive, etc.
@henryiii is working from a rather theoretical model, where a “solver” finds a suitable set of things to install, based on the available constraints. That’s how the example of pip downloading and trying to build numba 0.51 comes about. Under that model, Python version caps are a problem because they aren’t applied correctly (all older versions of numba should cap the Python version, but they can’t because of immutable metadata).
The proposed solutions (apart from “do nothing”), however, are not looking at that theoretical model, they are rather adding heuristics outside of that model.
- Error out if an upper bound is detected. Anywhere, even on a wheel? on a sdist? What even does “detected” mean? What if pip scans numba 0.52 first for some unknown reason, finds an upper cap of Python<=3.9 and errors, even though numba 0.53 supports 3.10?
- Ignore the back-search if an upper bound is detected. Again, what is “detected” here? What back-search? This assumes a certain implementation method. Pip does search backward through valid versions, because we prefer to install newer versions when possible, but that’s not guaranteed - all we actually promise is that we will find some valid solve. I have no idea if conda or poetry work like this (I believe conda uses a SAT solver, which may not do this at all).
Not all solvers use a backtracking model (although our research for pip suggests that there’s currently no usable solver that handles “on demand” metadata apart from backtracking ones), and not all backtracking models necessarily follow a strict “latest to earliest” scan of versions (although pip currently does in most cases - probably! Our prioritisation logic is distinctly non-trivial and I wouldn’t guarantee we never check numba 0.51 before numba 0.52…)
Actually, I can describe how that could happen, but I won’t unless asked, because this post is already too long…
My understanding is the following:
- Run the package resolution as normal, except ignore any upper bound on the Python version during the resolving
- If the resolved package has a Requires-Python field in its metadata, and that field specifies an upper bound on the Python version, then perform an action
Some of the options in the poll above are choosing which action to take (eg raise an error if the version of Python in the target environment is above the upper bound).
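The two-step flow above could be sketched roughly like this (`violates_upper_bound` is a hypothetical helper, not an existing pip API; it only does the post-resolution check, while resolution itself would have already ignored the upper bounds):

```python
from packaging.specifiers import SpecifierSet

def violates_upper_bound(requires_python: str, running: str) -> bool:
    """Report whether the target interpreter exceeds any declared
    upper bound - checked only after resolution has completed."""
    for spec in SpecifierSet(requires_python):
        if spec.operator in ("<", "<=") and not spec.contains(running):
            return True
    return False

print(violates_upper_bound(">=3.6,<3.10", "3.10.1"))  # True -> take an action
print(violates_upper_bound(">=3.6", "3.10.1"))        # False -> nothing to do
```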
OK. My instinct is that some of the options expect to be able to take an action before the full resolution is complete (“immediately error out”, for example, suggests this). But the devil is in the details, and I haven’t really thought through all the details of what any of this would mean in terms of pip’s resolution algorithm (because doing that would take quite some time…)
Editable metadata could be just fine with Option 1 too, as I pointed out. You could even require that a Python cap can only be made by editing the metadata. The issue I have with solution 2 is that it normalizes the behavior of making version caps on Python by trying to support it. Many of the problems it causes (especially for locking package managers) are not solvable. There are half a million PyPI developers, and they are not going to read the sorts of discussions we are having here. They are just going to see that capping is now supported, they are going to think “hey, this means I don’t have to support Python 3.10 right away”, and are going to cap. That’s not what it means, even under Option 2 - you can’t down-solve your Python version. I don’t think forcing errors for upper bounds (which is what you are doing, no one is arguing for solving using the upper bounds) is important enough to add to the system, any more than erroring because Windows is detected is important enough to add. It’s better to leave this up to the packages that really need it to implement themselves, via special dependencies, adding errors in setup.py, etc.
I’m trying to mostly stay neutral on the options, but I am rather not liking Option 2, because it’s giving users a sharp knife they think they want that will be very dangerous for most of them. Even with perfect, back-fixed metadata, Option 2 doesn’t really “add” anything at all to the solve, other than nicer error messages that can be obtained another way, and already must be obtained another way. Option 1 fixes the meaning of the field to match why it was added. Trove classifiers are there for “known to be supported”, and that’s fine.
This has been around for years, and for some reason only recently have Numba and SciPy decided to start using Requires-Python for upper caps, even though that’s now causing worse failures and/or workarounds.
I think the key issue here is this field is used by the solver. And the solver does not need to know about upper caps on Python, because it can’t solve for Python. (And even if it could, I wouldn’t want it to, because python version is important enough I want exact control over it - that’s why I almost always pin Python in environment.yml). Lower bounds are useful, because it can back-solve. But you virtually never back solve for an upper bound on Python.
None of those systems uses our metadata anyway - the dependency names may not match, they have their own systems for limits, etc. Conda, Homebrew, etc. all run Python migrations exactly the same way: they just start at the top of the dependency chain, try to build the Python 3.x version of the package, then keep going unless something visibly fails. Adding metadata-based failures here would slow the process down, not speed it up or help it in any way.
Also, every one of those does have some system to keep Python packages separate by version, which alone breaks the symmetry compared to other packages.
The conda-forge system is nothing like what is being proposed here. This proposal is to allow all maintainers to individually edit their own metadata for all time. Conda-forge packages are also immutable. Conda-forge’s system is a central pinning repository that collects known breakages; it’s a single addition, and it’s done by central maintainers, not individual package authors. There are no worries about security, about maintainers breaking historical versions of packages, etc. (I’m not completely sure this is accurate, since I haven’t had to mess with modifying metadata much, but I do know the entire design of conda-forge is built around maintainers who know packaging, rather than package authors, who often hate packaging, as on PyPI.)
I’m not saying it would not be useful, but it won’t solve the same problems in the same way.
Upper caps are bad for regular packages too, not just Python. For example, let’s say IPython 7.19 is out, and then Jedi 0.20 is found incompatible with it. So IPython 7.20 has a cap on Jedi < 0.20. Fine, Pip will probably solve IPython 7.20 and Jedi 0.19. Then let’s say you add another package with
jedi>=0.20. A “smart” solver will be too smart for its own good - it will back-solve to IPython 7.19 and Jedi 0.20 - it thinks it avoided a dependency conflict, but there should have been one! This is what it’s doing for Python, too - it’s looking back and finding an uncapped / looser cap.
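Here is a toy, newest-first backtracking solver that shows the back-solve (this is nothing like pip’s real implementation, and the version metadata is invented for the example):

```python
from itertools import product

def parse(v):
    # "7.20" -> (7, 20), for numeric version comparison
    return tuple(int(p) for p in v.split("."))

# Invented index: IPython 7.20 added a jedi<0.20 cap after a breakage
ipython_versions = ["7.20", "7.19"]        # newest first
jedi_versions = ["0.20", "0.19"]           # newest first
jedi_cap_by_ipython = {"7.20": "0.20"}     # exclusive upper bound

def solve(jedi_minimum):
    # Prefer newer versions, backtracking when constraints conflict
    for ipy, jedi in product(ipython_versions, jedi_versions):
        if parse(jedi) < parse(jedi_minimum):
            continue  # user asked for jedi>=minimum
        cap = jedi_cap_by_ipython.get(ipy)
        if cap is not None and parse(jedi) >= parse(cap):
            continue  # IPython's cap rejects this Jedi
        return {"ipython": ipy, "jedi": jedi}
    return None

# Adding jedi>=0.20 makes the solver quietly back-solve to the older,
# uncapped IPython - reintroducing the breakage the cap encoded
print(solve("0.20"))  # {'ipython': '7.19', 'jedi': '0.20'}
```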
This is technically correct, and is why PDM/Poetry do it, and conda does it, and now Pip is doing it too. But for a user, it ends up with worse practical solves. The workaround you propose for SciPy with a “breaking” uncapped SDist will end up messing with this, too.
(side note) The “solution” being pushed by some is to always cap: since this problem is caused by “tightening” a cap, monotonically increasing caps avoid it - but that’s 100% dependent on SemVer being an accurate predictor of errors. If pyparsing 3.0.5 is broken, you are out of luck again, since you capped at 3.1 or 4 instead - caps must monotonically increase or you are back to square one. Plus, if you are wrong, you are now creating dependency solver errors where none existed. And you are forced to make frequent updates. And you have to maintain old major versions, because if you cap, that means you should expect to be capped, too. And… (see the rest of the post).
Python is special, especially for locking solvers; they are trying to conceptually solve this problem assuming perfect metadata and assuming a target range of Pythons. Lower bounds on Python are pretty safe to consider accurate, but upper bounds are already not accurate; there are 346,823 projects with 3,108,221 releases that mostly have incorrect upper bounds (many of them are not even knowable yet, might be 3.12 or 3.19). Option 1 just says let’s not use Requires-Python for this at all.
I believe we are up to at least three proposals now: Editable metadata, A way to avoid SDists trying to install if a wheel is missing, and this one. And I might have missed one. I’d rather try to keep them separate, other than acknowledging that these might come along some day, so that one doesn’t have to wait on the other. I also don’t plan to push the editable metadata proposal forward, someone else will have to pick it up if they want it - not that it’s not useful, but it’s a huge undertaking.
The SDist one is tricky, but it looks like @rgommers might already have something for that, so that can be moved forward there.
Is there a PEP that explains what “requires-python” is supposed to mean? Because the poetry problem goes away if you can convince them to use it to mean “a restriction on the supported versions of Python” rather than “the range of versions that must be supported”.
Yes, that’s mentioned at the top. PEP 345 & PyPA Core Metadata specification state “This field specifies the Python version(s) that the distribution is guaranteed to be compatible with.”. This would only be compatible with Option 3 - if this is a guarantee, then you can’t guarantee the future so you have to ignore it when solving.
And I don’t think that affects them. There needs to be two values here, or this needs to be disconnected from the solve. A library’s metadata should not be dictated unconditionally by the current lock file.
I completely agree with you. I find it really confusing that poetry has the fields side-by-side:
[tool.poetry.dependencies]
python = '>=3.7, <3.11'
numpy = '>=1.20'
scipy = '>=1.5'

The first requirement is a “for all” requirement, while all of the others are “there exists”.
All of the values in tool.poetry.dependencies should be “there exists”, and there should be a separate field somewhere else that does the “for all” - although I like your idea above of omitting it in most cases and letting Poetry figure it out as the intersection of the requirements specified by all of the dependencies.
I’ve seen a few people mentioning apt / Linux distros / etc. here. I’m nowhere near as much of a Python packaging expert as other contributors to this thread, but I do have lots of Debian packaging experience so I thought I’d offer some context on that. It’s true that
Requires-Python turns into “just another dependency” in apt terms (although it might have to be manually transcribed by the packager - I don’t know of anything that would do it automatically), but as an isolated statement this is potentially misleading and needs some more specifics.
The consequences of upper bounds on Python with apt are indeed likely to be less bad than they are with pip fetching from PyPI: typically the result would be either (1) dpkg/apt would “deconfigure” the package with the upper bound in order to upgrade Python, and then unpack and configure a version with a weaker or absent upper bound, (2) apt would decide to automatically remove the package with the upper bound if it can’t find a better solution, or (3) apt would bail out with an error and require the user to resolve things, perhaps because the consequences of removing the package with the upper bound are too bad in some way.
However, apt-based distributions (certainly Debian and Ubuntu) don’t remotely try to provide an inventory of versions that you might choose to install that corresponds to the complete upstream history, for any package. The set of available versions is typically zero or one per line in
/etc/apt/sources.list (there’s no technical restriction on more being available, and it can be different in third-party archives, but Debian and Ubuntu’s archive management tools generally arrange for there to be at most one version of a package per suite+architecture). So
Depends: python3 (<< 3.10) in a package built for a Debian release that defaults to Python 3.10 doesn’t in practice mean that apt will try to downgrade to Python 3.9 and sort everything out: firstly, Python 3.9 probably won’t even be available to apt, and secondly, even if it were, we’re operating in a system-wide flat system here, and the chances of finding a valid solution with an older version of Python than your distribution provides are negligible anyway.
Upper bounds also complicate apt’s job in finding solutions, which is already extremely difficult given the large dependency graph for a complete OS, so generally speaking we only add them when we know that a given version of a dependency will definitely break the package with the dependency - for Python packages this is normally just used for binary packages with extensions built for say Python 3.9, which might get something like
Depends: python3 (>= 3.9), python3 (<< 3.10). But this is really more like Python tags in wheels than it is like Requires-Python.
At best, tight upper bounds can serve as a release management hint (effectively “dear Debian release team, don’t release with Python 3.10 until you also have a numpy that works with it”), since we try to ensure that we have a suite of packages that remains dependency-consistent throughout. However, preemptive upper bounds are a rather big and inflexible hammer for that job. At Debian’s scale we normally prefer to reserve that hammer for cases where we know it’ll be needed, as otherwise we end up getting ourselves into giant interlocked tangles where we’re waiting for dozens of package maintainers to do things before we can make forward progress.
For the sort of case where a dependency turns out to break something that depends on it in a way that we couldn’t 100% predict in advance, we’d be more likely to declare the problem the other way round: the new version of the dependency would declare
Breaks: depending-package (<< first-fixed-version). That’s useful for release management purposes, and it allows apt to refuse to upgrade the dependency unless it has a solution that deals with the broken package, without having to preemptively declare tight upper bounds that might be non-issues. I don’t think there’s any analogue for this in Python’s own dependency system.
I know this isn’t all completely applicable to pip or other Python solvers, but I hope it’s somewhat useful anyway.
The next step for me is to make a PR to packaging to add set intersection - this is needed for any of the three solutions here, as well as helping nox, cibuildwheel, and probably other packages that want to query the requires-python setting with specific questions, like “is
3.9.* supported”. It’s not trivial to compute properly, so it will probably be a little while before I do that - afterwards, we can revisit this and see which solution is preferred.
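For concrete versions, `SpecifierSet` can already answer membership queries today; what’s missing (and what the proposed addition to packaging would provide) is computing the intersection of two ranges as a simplified range. A small illustration:

```python
from packaging.specifiers import SpecifierSet

requires_python = SpecifierSet(">=3.7,<3.11")

# Membership for a concrete version works today...
print("3.9.7" in requires_python)   # True
print("3.11.0" in requires_python)  # False

# ...but '&' only concatenates clauses; it does not reduce
# ">=3.7,<3.11" & ">=3.9" to a simplified ">=3.9,<3.11" range.
combined = requires_python & SpecifierSet(">=3.9")
print("3.8.0" in combined)  # False
```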
There is something I hadn’t thought about, but it was highlighted by numba/llvmlite: they have an RC release for Python 3.10. So the “correct” solution (before approximately Monday) might be for pip on Python 3.10 to automatically get numba 0.55rc1 (and the matching rc for llvmlite), rather than scrolling back to some old version or even failing.
I’m really wondering if solvers should be directional - that is, never fall back to an older version of a package whose upper bound is higher than a newer version’s. For most cases that is probably much better, especially since metadata is immutable, so users cannot “fix” old releases. This is clearly the case with upper bounds on Python, but it’s usually true for upper bounds on anything, unless there’s an LTS release, which is rather uncommon.
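A minimal sketch of what such a directional rule might look like - all names and data shapes here are invented for illustration, not how any real resolver is structured:

```python
# Hypothetical "directional" candidate selection: if the newest release
# excludes the running Python via an upper bound, refuse to scroll back to
# older releases whose (missing or higher) upper bound merely *claims*
# support - they were released before that Python existed.
def pick_candidate(candidates, python_version):
    """candidates: list of (version, upper_bound_or_None), newest first.
    python_version and bounds are (major, minor) tuples."""
    version, upper = candidates[0]
    if upper is None or python_version < upper:
        return version  # newest release supports this Python
    raise RuntimeError(
        f"newest release {version} caps Python at "
        f"<{upper[0]}.{upper[1]}; refusing to fall back to an older release"
    )

releases = [("1.23.0", (3, 11)), ("1.22.0", None), ("1.21.0", None)]
print(pick_candidate(releases, (3, 10)))  # "1.23.0"
```

On Python 3.11 this fails loudly instead of silently installing 1.22.0, which is the behavior users actually expect when a cap exists.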
I’m only just catching up with this thread - how much of the primary problem would be solved by just releasing a wheel tagged for the unsupported version that only contains a specific error message? e.g:
```python
# Lib/site-packages/numba.py
raise ImportError(
    "numba is currently unsupported on Python 3.11. "
    "Please use Python 3.10 or earlier, or specify --pre "
    "to install our current prerelease build. "
    "Visit <our URL here> for updates on new releases."
)
```
You’d have this as a totally separate repo, and carefully release wheels only for the unsupported versions, so that users can still explicitly pass
--no-binary to build from source (which I find myself doing from time to time, so I’d like to keep it that simple rather than having to patch the sdist first) - but most users will quickly get the wheel with the explanation.
It’s obviously not as ideal as failing at the resolution stage (and I think my selector packages idea would be a better approach for solving this kind of edge case there, as well as the others), but what I propose above would work today with no changes to any tooling.
(Oh, and I voted for “don’t support upper caps in resolvers, and warn about it in build tools” earlier, but you probably could have guessed that from my proposal.)
I believe that’s (at least almost) in my original post:
This is based on a similar idea proposed for removing manylinux1 support. The only difference is that this one, by depending on a package with an error-raising SDist, gives you the error during install rather than later, when you try to use the package. To the best of my knowledge, this does a better job of solving most of the issues authors face with unsupported Python versions, and it’s reactive instead of proactive - you can’t “not support” Python 3.11 until 3.11 is far enough along for PyPI to support wheel uploads, so you can actually test and see whether you really don’t support it before breaking it. It’s also not overly easy; a simple cap would tempt users who don’t write complicated packages to limit Python support.
I guess I’d forgotten the earliest suggestions, 60-odd posts later.
The problem is that an error-raising sdist actively prevents people from testing your package - for example, if you believe it’s incompatible because of a core CPython bug (likely). Core devs are not going to jump through that many hoops to test your package, especially if we ever get around to the automated testing we keep thinking about. Making the sdist unusable is unnecessarily restrictive.
That should be a fairly easy change to make to PyPI. I’m pretty sure they only block version tags to prevent abuse and/or user error. It’s certainly easier than changing the current definition (de facto or otherwise) of the Requires-Python field.
It’s not entirely clear, but I think you’re suggesting this is a good thing? I certainly think it’s a good thing. We don’t want to make it too easy for every random package to fail to install, but it should be possible for aware/active/thoughtful developers to help their users fail quickly with helpful guidance for known (and monitored) scenarios.
No, no, that’s suggestion 2. What I was suggesting is, to give a concrete example:
NumPy 1.23 releases before Python 3.11 tags are allowed on PyPI / before Python 3.11 is ABI stable. They release the normal set of wheels and a normal SDist. They don’t limit Requires-Python to <3.11, since we discourage that (option 1).
Then Python 3.11 becomes ABI stable and the 3.11 tag is allowed on PyPI. NumPy tests the most recent release to see if 3.11 is supported. Sadly, they haven’t been testing alphas and betas, and so there are several problems with NumPy 1.23 on Python 3.11rc1. So they upload a new set of wheels for 3.11, such as
numpy-1.23.0-cp311-cp311-manylinux2014_x86_64.whl, which are empty and just contain a dependency on
break-me-if-python-is-too-new or something like that - that package is where the “broken SDist” lives. That way, “normal” users get a nicer error, which is what they want. Dependencies are specified per file, not per package.
If you build a non-released version, or just from source, you don’t get this error.
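For concreteness, the break-me-if-python-is-too-new package’s sdist could be little more than a setup.py that aborts with a readable message - this is a hypothetical sketch (the package name comes from the example above; the function name and message are invented), written as a helper so the check is testable:

```python
import sys

def refuse_if_too_new(max_supported, running=None):
    """Abort an sdist build with a readable message on unsupported Pythons.

    Intended to be called at the top of setup.py, before setup().
    max_supported / running are (major, minor) tuples.
    """
    running = running or sys.version_info[:2]
    if running > max_supported:
        raise SystemExit(
            "This package does not yet support Python "
            f"{running[0]}.{running[1]}; see <our URL here> for updates."
        )

refuse_if_too_new((3, 10), running=(3, 9))  # no-op on a supported Python
```

Since only the empty 3.11 wheels depend on this package, installing those wheels triggers the message, while source builds of the real package never touch it.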
That should be a fairly easy change to make to PyPI.
Nothing needs to change for PyPI. Not allowing 3.11 wheels until 3.11 is ABI stable is perfect.
I think you’re suggesting this is a good thing?
Yes. That’s the problem with option 2 - it makes it really easy to just cap Python without knowing if it’s broken.