Have pip warn when installing for a Python version that is not covered by the Trove classifiers?

Yes, that would be a great idea.

pip will simply install pure-Python packages marked with “py3” in their wheel names, and you get no feedback about whether these were actually tested with the target Python version.

Packages should probably advertise support explicitly via the Classifier: Programming Language :: Python :: 3.xx classifiers, since they typically only define a minimum Python version as part of their Requires-Python: metadata.
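
For reference, both pieces of metadata are visible on any installed distribution. A minimal sketch using the standard library’s importlib.metadata (“requests” here is just an arbitrary example name, not something from this discussion):

```python
from importlib.metadata import metadata

# "requests" is an arbitrary example; substitute any installed distribution.
meta = metadata("requests")

print("Requires-Python:", meta.get("Requires-Python"))  # typically just a lower bound, e.g. ">=3.8"
print("Version classifiers:")
for classifier in meta.get_all("Classifier") or []:
    if classifier.startswith("Programming Language :: Python :: 3."):
        print(" ", classifier)  # the versions the project explicitly advertises
```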

2 Likes

The problem is that it was never clear exactly what setting this metadata entails. Would installation programs such as pip rely on it to decide whether or not to install a project?

I suppose the answer is obvious to packaging experts, but not to ordinary developers, IMHO.

2 Likes

I think it would be good to have pip print out a notice or warning if the classifier is not set in the package’s metadata when installing it or when showing the installed packages.

The community would then slowly learn to use them more frequently and pay attention to them.

pip could also get an option to turn this notice/warning into an error, so that people can decide for themselves whether they only want tested packages installed.
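
As a rough sketch of what such a check could look like (the function name and the exact rule here are my own assumptions, not an existing pip feature):

```python
import sys
from importlib.metadata import distributions

def packages_missing_current_classifier():
    """Yield installed distributions that list some 3.x classifiers
    but not the one for the running interpreter."""
    wanted = f"Programming Language :: Python :: {sys.version_info.major}.{sys.version_info.minor}"
    for dist in distributions():
        classifiers = dist.metadata.get_all("Classifier") or []
        # Only flag packages that declare *some* minor-version classifiers;
        # a bare "Programming Language :: Python :: 3" carries no X.Y information.
        declares_minor = any(
            c.startswith("Programming Language :: Python :: 3.") for c in classifiers
        )
        if declares_minor and wanted not in classifiers:
            yield dist.metadata["Name"]

for name in packages_missing_current_classifier():
    print(f"notice: {name} does not list Python "
          f"{sys.version_info.major}.{sys.version_info.minor} in its classifiers")
```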

But this is getting off-topic. Perhaps we ought to start a separate discussion.

1 Like

As a packaging person, I would be a hard -1 on this idea. We already have requires-python, which installers use and which users can optionally ignore.

If you perhaps wanted a PEP that amended PEP 621 to require that backends error when a version classifier is not contained within the declared requires-python range, then I would be in favor.
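
For illustration, a backend-side check along those lines could be as simple as the following sketch, using the packaging library (the function name and the ValueError are placeholders, not any backend’s real API):

```python
import re
from packaging.specifiers import SpecifierSet

def check_classifiers_against_requires_python(classifiers, requires_python):
    """Raise if a 'Programming Language :: Python :: X.Y' classifier falls
    outside the declared requires-python range."""
    allowed = SpecifierSet(requires_python)
    for classifier in classifiers:
        match = re.fullmatch(r"Programming Language :: Python :: (3\.\d+)", classifier)
        if match and match.group(1) not in allowed:
            raise ValueError(
                f"classifier {classifier!r} is not covered by requires-python = {requires_python!r}"
            )

# 3.7 falls outside ">=3.8", so a backend doing this check would refuse to build.
try:
    check_classifiers_against_requires_python(
        ["Programming Language :: Python :: 3.7",
         "Programming Language :: Python :: 3.12"],
        ">=3.8",
    )
except ValueError as exc:
    print("build error:", exc)
```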

3 Likes

requires-python typically only defines the lower bound. I know it can define an upper bound, but it’s discouraged, and once you’ve made a release with no upper bound, there’s no point adding it later, as pip will end up installing an old release for a newer Python.

So you can’t say requires-python >= 3.8 and safely have it understood that both 3.11 and 3.12 are tested and supported.

Just today I found a package with requires-python >= 3.7 that is incompatible with 3.12 due to the removal of ssl.match_hostname. But the package had made no claim to support 3.12 yet, because it only has classifiers for 3.7–3.11.
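
A tiny demonstration of that gap, using the packaging library with the version numbers from the example above (the package name is omitted, as in the post):

```python
from packaging.specifiers import SpecifierSet
from packaging.version import Version

requires_python = SpecifierSet(">=3.7")                                   # what the package declares
classified = {Version(v) for v in ("3.7", "3.8", "3.9", "3.10", "3.11")}  # what it advertises

target = Version("3.12")
print(target in requires_python)  # True: installers will accept it
print(target in classified)       # False: the project never claimed to support it
```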

I’m also -1 on this. Trove classifiers were always optional, and this would effectively make them required, because otherwise people would file bug reports asking for them to be added.

Any package that does not explicitly define support (e.g. via a classifier) for a not-yet-final Python interpreter should be considered unsupported by end users.

I’d sooner be open to adding a max-python field to the project table (defaulting to the latest stable release) than using the classifiers.

3 Likes

I’m -1 on things that allow preemptively preventing installs on newer Python. Life would be terrible if most people put upper constraints on requires-python.

That said, I wish it were easier to plug code like this into package managers. I wrote something very much like this for a Python upgrade at work recently: heuristics around classifier history, wheel tags, etc., to determine whether I needed to upgrade a given package.
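
A stripped-down sketch of that kind of heuristic, pulling signals from PyPI’s JSON API (the function name, the choice of signals, and the target default are mine, not from the tool described above):

```python
import json
from urllib.request import urlopen

def support_signals(project, target=(3, 12)):
    """Gather rough signals about whether `project` claims support for `target`.
    Heuristics only: missing signals do not prove incompatibility."""
    with urlopen(f"https://pypi.org/pypi/{project}/json") as resp:
        data = json.load(resp)

    xy = f"{target[0]}.{target[1]}"
    info = data["info"]
    wheel_python_tags = {
        f["filename"].split("-")[-3]                 # python tag, e.g. "py3" or "cp312"
        for f in data["urls"]
        if f["filename"].endswith(".whl")
    }
    return {
        "has_classifier": f"Programming Language :: Python :: {xy}" in info["classifiers"],
        "requires_python": info.get("requires_python") or "(unset)",
        "wheel_python_tags": sorted(wheel_python_tags),
    }

print(support_signals("flask"))
```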

4 Likes

See Can pyreadiness look at more than the classifiers? · Issue #18 · di/pyreadiness · GitHub for more discussion about using classifiers vs requires-python to indicate version support. I go into more detail there.

I am absolutely against having pip warn if classifiers aren’t met when installing, because I’m against adding the classifiers to Flask and my other projects. The version classifiers are noisy, requiring new releases just to update metadata, even if nothing else changes. Please, please do not do this, it will cause endless issues to be opened against my projects.

6 Likes

I am also -1 on this. Classifiers (IMO) have always described what the project supports - they are information for the user to consider, not for tools to enforce unquestioningly. The lack of a classifier says “we don’t support Python X.Y”, not “this project doesn’t work on Python X.Y”. The distinction is important - plenty of people are perfectly happy to work with “use at your own risk” configurations, and we need to continue supporting such usage.

As a side point, surely the fact that many projects have a Python :: 3 classifier makes this pointless anyway? Or are we saying that a Python :: 3 classifier somehow “doesn’t count”?

If someone was bothered enough to do this, I see no problem with it. For that matter, I think it would be perfectly reasonable for a build backend to warn in this case, even without a standard. In fact, I don’t particularly think that failing would be unreasonable, although that would be a judgement for the individual backend to make - some backends may prefer to be stricter than others.

6 Likes

It doesn’t even say that; all it means is “we haven’t made a release since we last changed the list of classifiers.” The supported version of Flask (the latest release) absolutely supports Python 3.8 - 3.12, even though it doesn’t have any of those classifiers. This is indicated by requires-python = ">=3.8" and the fact that your tests will still pass.

3 Likes

So true! I find that part of releasing so tedious.

Wow, I never thought to just not use them… I’ll be removing them in every project that I own now. Thank you David!

3 Likes

Be aware that humans look at projects on PyPI and see classifiers as a sign of active maintenance, with a specific version being listed implying “we tested on this version in our CI”.

Keeping classifiers up to date can be interpreted as a positive sign of project health.

9 Likes

Python :: 3 is in contrast to Python :: 2, rather than to the x.y classifiers. I take it to mean “I test and support some Python 3 version”. The specific x.y classifiers are more useful.

I could be totally off base, but I think the share of users who go to PyPI and look at the Python version classifiers would be under 0.01%, probably even lower. Users first look at when the last release was, followed by activity on the repository.


4 Likes

Is there any possible scenario where the x.y classifiers could be officially deprecated/discouraged? Otherwise we’ll keep having a split ecosystem where some authors use them while others don’t, which makes it confusing for users. PyPI already shows the Requires: Python field on the site.

That being said, for packages that build version-specific (non-abi3) wheels, the Trove classifiers often communicate important information: “have we built and published PyPI wheels for 3.x yet?”

For consumers of a library, testing with a target Python version might be a red herring. Wouldn’t you want to test against all dependencies when they’re updated? Of course, often Python is the most complex of your dependencies, and updating it is, sadly, a big deal. But IMO, what you want here is pinning all the deps, and end-to-end tests that gate each upgrade, not just a new Python.

For maintainers of a library, I see two cases. Some would hate having to release a new version just because a new Python came out (incidentally, this is where stable ABI helps for compiled extensions).
But others want to make sure users are warned/blocked if they use an untested configuration. And that’s not easy to do today.
Am I reading the issue right?

That is important info, but you can also look at the list of wheels. Perhaps PyPI could somehow summarize that info and display it more prominently.

2 Likes

Perhaps PyPI could somehow summarize that info and display it more prominently.

Yeah, I think we could remove the 3.x classifiers if PyPI had a table/matrix of the wheels that were published for a package (as it’s currently not a good user experience to hunt through and decipher this information yourself by reading the list of files).

For example, try to work out which Python versions and platforms are and aren’t supported from the file list of a package like pydantic-core.

I think a simple table that listed wheels per Python interpreter and removed any blank rows and columns would be a great way to summarize this information, irrespective of what we do with the 3.x troves. For example, lxml only has universal macOS wheels (i.e. including ARM) for 3.11 and 3.12, but only x86 macOS wheels for older 3.x. This is very hard to decipher and debug by looking at just the list of wheels.

A quick and dirty draft of the idea:

| Python Version | win_amd64 | macosx_11_0_universal2 | manylinux_2_28_x86_64 |
|----------------|-----------|------------------------|-----------------------|
| CPython 3.8    | :white_check_mark: | :white_check_mark: | :white_check_mark: |
| CPython 3.9    | :white_check_mark: |                    | :white_check_mark: |
| PyPy 3.9 7.3   |                    |                    | :white_check_mark: |

And if the package has no binary wheels at all, then this table is omitted and maybe replaced by a badge that says something like “Pure Python”/“All platforms”, etc.
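
The underlying grouping is straightforward to compute from the wheel filenames. A sketch with the packaging library (the filenames below are illustrative stand-ins, not the real pydantic-core file list):

```python
from collections import defaultdict
from packaging.utils import parse_wheel_filename

# Illustrative filenames in the style of a compiled project's release files.
filenames = [
    "pydantic_core-2.14.5-cp312-cp312-win_amd64.whl",
    "pydantic_core-2.14.5-cp312-cp312-manylinux_2_28_x86_64.whl",
    "pydantic_core-2.14.5-cp311-cp311-macosx_11_0_arm64.whl",
    "pydantic_core-2.14.5-pp39-pypy39_pp73-manylinux_2_28_x86_64.whl",
]

matrix = defaultdict(set)  # interpreter tag -> platform tags with a wheel
for filename in filenames:
    _name, _version, _build, tags = parse_wheel_filename(filename)
    for tag in tags:
        matrix[tag.interpreter].add(tag.platform)

for interpreter in sorted(matrix):
    print(f"{interpreter:10} {', '.join(sorted(matrix[interpreter]))}")
```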

7 Likes

And I suggest a drop-down to select between the two most recent versions of the package (as PyPI doesn’t know if there are more files to be uploaded).

While you’re at it, add the most recent pre-release to the drop-down, if it’s newer than the most recent release, so users can test with a new version of Python (or even a new version of the package).

Perhaps older versions will need a table somewhere so users can find the newest version which supports an old version of Python.

That only covers binary wheels, though. For pure Python projects you wouldn’t be able to get the same information based simply on the uploaded wheels, which is where Trove classifiers as a signal of tested support have historically come in.

I personally have no opinion on this, although I have argued in the past for dropping these sorts of classifiers as they are easy to forget about.

3 Likes

I tend to ignore these classifiers precisely because I don’t think of them as something a package author is likely, or even expected, to keep scrupulously up to date. Not everyone has a convenient way of testing every release of a package against a full spectrum of Python versions, let alone a full matrix of platforms, Python versions, and dependency versions. There are many small, pure-Python packages which probably release without testing any version except “whatever the developer was using at the time they made the release”.[1] In my experience it’s not uncommon for a package to silently, and perhaps inadvertently, “drop” support for an old Python version because, e.g., a developer made a change that uses some syntax and didn’t realize that syntax wasn’t available on some old Python version that previously worked.

I’d be a hard -1 on having pip complain simply because the package author didn’t manually specify certain version classifiers in the metadata. I would see that as a rather sudden increase in what is expected of package authors. Apropos of the other thread about a hypothetical build farm, I think the only case where I’d be okay with a warning is if there were some kind of build/test farm, so that we could actually automatically test a package on upload, and then warn in cases where we know for sure the test failed on a particular version.


  1. and sometimes not even that! :slight_smile: ↩︎

1 Like