How do we want to manage additions/removals to the stdlib?

I purposefully don’t want to get into a discussion about what should be removed from the stdlib or how much should be there as that’s not the point of this topic. I would prefer to stay focused on the mechanisms used when a module is proposed to be added or removed.

So is your suggestion then to require a public discussion somewhere for any module proposed for deprecation?


PyPI already has that via a Development Status classifier search, but I bet no one pays attention to it.

The problem is that the Development Status tag is a property of individual releases, and a package’s transition from “mature” to “unmaintained” is usually only observed in retrospect, and almost never followed up by uploading a .post1 version to update the development status. In many cases the maintainer is missing and it’s unclear whether anyone could upload a new version.
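For illustration, classifiers are just plain strings in a release’s metadata (as exposed by importlib.metadata or the PyPI JSON API), so checking a package’s declared status is simple string matching. This is a hypothetical helper, not an existing API:

```python
def development_status(classifiers):
    """Return the value of the 'Development Status' trove classifier,
    or None if the release doesn't declare one."""
    prefix = "Development Status :: "
    for classifier in classifiers:
        if classifier.startswith(prefix):
            return classifier[len(prefix):]
    return None

# Classifiers as they appear verbatim in package metadata:
classifiers = [
    "Programming Language :: Python :: 3",
    "Development Status :: 7 - Inactive",
]
print(development_status(classifiers))  # 7 - Inactive
```

The catch described above remains: the status only changes when someone uploads a release that declares a new one.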


I strongly support slower additions to the standard library, ideally with a period of usage on PyPI or some other way to get extensive feedback from users.

I’ve had plenty of issues with early versions of the typing module, where features were shipped in patch versions of 3.5 and kept changing fast through to 3.7. I’ve also seen plenty of commentary that asyncio would take on more of trio's design decisions, but is strongly constrained by the backwards-compatibility constraints of living in the stdlib.

Give new modules a chance to learn what breaking changes they want or need outside the stdlib first!

I separately support testing-main-against-PyPI-packages, for what that’s worth - the standard library is not representative of the Python ecosystem and if resources can be found I think this would be valuable.


Yes. Direct users to a specific thread.

That thread would ideally link to reasoning or provide users with reasoning for deprecation.

If the reasoning is based on lack of funding for maintenance, provide users with a clear path for them to help fund maintenance.


Agreed on this point. In general, I think the model of “develop a package for stdlib adoption on PyPI, and then propose it for stdlib inclusion” is a good one. The problem with provisional modules is that users of older Python versions have no means to get the “improved” version included with later releases, so code that depends on provisional libraries ends up with a greater support burden as a result.

Having said that, and somewhat in contrast, I think we should be more willing to adopt libraries from PyPI into the stdlib. The existing constraints (that the maintainer must be willing, and interested in supporting the package in the stdlib, that the package is stable and can handle the slower pace of stdlib change, etc) should still apply. But we should avoid having an attitude of “if people are happy with it on PyPI, what’s the point in adding it to the stdlib?” - if we combine that with a preference for developing modules on PyPI first, we’re in effect saying that we should never add things to the stdlib at all, and I’m not in agreement with that - I think the stdlib does need to grow and change over time, or it will die.

I know that the 2 quoted libraries (typing and asyncio) were tightly linked to language features. I don’t think that’s necessarily sufficient justification - it has been in the past, but we should try harder to separate language changes from the supporting libraries, and a policy that the library has to be developed on PyPI would help enforce that.

Edit: On reflection, this is more about “provisional” status. Libraries like statistics and secrets (and personally, I’d include graphlib, although I know there’s some controversy on that one) were built directly for the stdlib, and generally there’s not been much pushback on those. They have broadly been successful, and useful, additions to the stdlib. So let me modify my position slightly: I think that provisional status should be replaced with requiring the library to be developed on PyPI until it’s ready for (non-provisional) inclusion in the stdlib. And I don’t think we need a “half-way” mechanism of “3rd-party but installed by default” - if a library isn’t good enough for early adopters to get it from PyPI, it’s probably not good enough for the stdlib.


This type of discussion does not scale and is a sure way to burn out developers. On the one side, it’s typically one or a couple of core devs who have to maintain the code. On the other side, there are lots of users who want to keep the code in the stdlib, because it makes their lives easier and ensures long-term maintenance.


Having a broad discussion where you direct every potential user of a module to discuss its removal is not a good idea IMO, especially if you think you can have nuanced discussions there about things like “how do we maintain this library going forward” or funding.

We’ve done this for certain things in pip, and… overall, these become difficult discussions to manage, are a LOT of work and usually don’t go anywhere useful before burning out the maintainers.

It is way too easy to get swamped by users, even if it’s a small number of them that are noisy; it’s a moderation pain since a lot of the folks aren’t regular participants in such discussions and that will show; it’s difficult to have a direction to the conversation since folks-who-just-saw-this will keep coming/barging into the discussion without earlier context — it’s non-trivial to have a nuanced conversation like maintainability, funding etc in such threads.

It also seems odd to discuss funding that might be perceived as earmarked for keeping a thing in the standard library. I’ll stay away from that whole conversation beyond saying: making the addition and removal of modules a more gradual process than yes/no across Python versions can help with migration pains, which is what people who want to throw money in these situations are likely worried about (although it comes at the cost of additional complexity).

On the other hand, getting users to provide inputs at a single point is certainly a useful communication tool — we’ve definitely done this for certain changes in pip. Certain messages from pip have a link to the discussion on the issue tracker. I like to think that we’ve learnt about when that’s a bad idea and when it’s useful — i.e. we’ve gotten better at this — and don’t make the same mistakes anymore in terms of the change management (there has been more than one instance of the issue tracker being basically DDOS’d by the users who got such messages).

That said, even the cases where we’ve actually had it go well (eg: the current round of distutils → sysconfig migration warnings), it has involved a bunch of planning, using the messages as a way to communicate prior to deprecation, having a maintainer actually stay on top of every message (which is a lot of work), writing up StackOverflow answers, providing detailed information when the user first arrives to the discussion and making adaptive releases as we identify actionable chunks. That’s expensive, in terms of how much time and energy it takes from the maintainers, and this was for something that was specifically scoped as “does not affect you now”.


I know what sort of things have really hurt my ability to contribute to
the stdlib, and what sort of things have really damaged my ability to
care. But I honestly cannot imagine that having somebody volunteer to
take over maintenance, or offering to pay me to contribute, would make
me burn out faster.

Do you have objective evidence that giving the community a clear path to
rescue a stdlib module that has not been maintained rather than removing
it would cause developer burnout?


The issues around funding for open source projects are complex and sometimes difficult. We all want our free software to be free like beer, but somebody is paying for it, if not in money, then in time and effort.

Other projects have tackled the issue of paying people. There are lots of options, and opinions.

A year ago, Mark Shannon approached the Python-Dev mailing list, the Steering Council and (I assume) the PSF with a proposal to speed up Python. I see that he has recently started doing exactly that, so I presume some arrangement regarding funding was made. Likewise, the PSF also pays some other people to work on both Python and infrastructure. So it is a model that really can work for us. (It might not work for other communities.)

I don’t see why we can’t look at something similar for individual modules. Something like, "we need X dollars to modernise module Foo, and another X dollars to guarantee maintenance for two releases. Here’s a Kickstarter campaign. If you insist that you really, really need this module, show us by putting a couple of bucks in it."

Or, "we need a new maintainer, or this module is going to be deprecated with the possibility of removal in the future".

I hear you when you say the pip experience has not been great, but there are so many cultural and technical issues that we cannot necessarily extrapolate beyond the pip community. Of course haters gonna hate whatever you do. But it’s worth looking at ways that we can get more engagement from the community, rather than just saying we’re removing a module, if you don’t like it, too bad.

We’re frequently complaining that there are too few developers to maintain everything. Maybe this is a way to encourage a few more people to step up. And if they can’t step up, at least put some money in.

"That said, even the cases where we’ve actually had it go well (eg: the current round of distutils → sysconfig migration warnings), it has involved a bunch of planning"

Well, yes. Of course it does. Planning what you are going to do is part of development. If you think planning is expensive, you should try not planning :slight_smile:


Are you aware that typing was, and I think still is, provisional? So
it has a deliberately unstable API?

Maybe we’re not doing a good job of advertising when modules have an
unstable API.


How do you feel about private (single underscore) modules in the stdlib?

For example, suppose I was to refactor code from a module into a separate file: would it be better to move everything into a package, or to keep the main .py file and add a private module for the internal stuff?

+1 to that.

How would you feel about keeping all provisional modules in a dedicated
provisional namespace?

import provisional.module as module

That might make the warning unnecessary, and make the move from
provisional to stable even more clear.


However, we’re not talking about modules with an active maintainer. We’re talking about modules which the core devs want to remove. So how much would you want to be paid to continue maintaining one of “your” stdlib modules, long after you’d lost interest in it, or maybe your circumstances had changed and you were unable to find the necessary time? Or to maintain a module that you didn’t care about, and nor did any of the other stdlib maintainers? What would happen when the funding dried up - would you be OK with going through the same cycle of starting a discussion, finding someone who is willing to pay, etc.? What if the person paying wants you to make a change you disagree with? Or one that the community disagrees with? Or that violates Python’s backward compatibility rules, for example?

Maybe you’d be fine handling all of that. Personally, dealing with that would burn me out in a very short time. I’m not sure I can even quantify what sort of payment I’d find sufficient to put up with it.

The “clear path” that has already been mentioned is for someone to take the code and maintain it as a 3rd party library on PyPI. It seems to me that the discussion here is based on the assumption that users wanting to keep a module alive won’t go for that (for some reason). So why assume that any other equally clear proposal would be acceptable? (And note that paying money is an extremely unclear proposal - I’ve already mentioned conflict of interest questions, and there’s also the whole area of international tax and contract law to take into consideration - as well as the fact that many core devs already have full-time jobs and may not be able to easily accept payment for additional work).


My comment was targeted on open discussion, not on funding or external maintenance. I should have removed the unrelated paragraph. Sorry for the confusion.

For the record, I’m all in favor for moving modules to external maintainership and getting more funding for core dev.


Neither am I.

(Understand that I’m not specifically referring to myself here.)

Suppose I’m over-stressed and lost interest in “my” module
in the std lib, and haven’t touched it in five years. There are tons of
serious bug reports, nobody wants to work on them, and the Steering
Council decides to deprecate it and maybe remove it.

But before that becomes official, the quantity surveying community
decide that they cannot live without the aardvark module and offer to
crowd fund, say, 3 days a month for two years. Or maybe it’s just one
company or person who offers to fund it.

Or maybe they are flush with cash and decide to fund a full time
position. Remember when folks were all excited about crowd funding Mark
Shannon? From memory, we were talking about half a million there, although
not for maintaining one small module :slight_smile:

Now maybe to people on a five-figure salary with full time work, a
measly three days a month isn’t going to interest them one bit. But
there have certainly been times in my life that even 1 day a month of
paid development work might have helped me rediscover my love of
quantity surveying.

I’m probably not the only Open Source coder who would find their stress
levels go down rather than up if offered some money.

(Anyone who has enough of their needs fulfilled that money ceases to be
a de-stressor is very lucky :slight_smile:

And if not? I can always say No thanks, I’m too busy or otherwise not
interested. Maybe I want a change of career. Maybe I’ve decided to turn
my back on technology. There’s no obligation to say yes.

But there’s surely no harm in giving the community the opportunity to
make the offer to pay, if they care enough. (Perhaps they don’t.)
Maybe one of the other devs will volunteer. Or not.

The PSF pays Łukasz to work through a lot of bug reports that aren’t
being worked on by anyone else. He probably considers that a fair

It is hardly unusual for developers to work on projects that they don’t
personally care about in exchange for money.

That’s two years from now, and the module will be in a much better state
by then.

Maybe my circumstances have changed and I’ve rediscovered my love of the
module, or there are a dozen quantity surveyors lining up to maintain
the module, or a big rock from space has hit the planet and the
survivors have more pressing concerns than the Python stdlib. Who knows?

We don’t have to solve problems for all time to make a temporary
solution worthwhile. Don’t let the perfect be the enemy of the good.

We have plenty of precedent here. There are people who either are, or
have been, paid by their employers to work a certain number of hours on
Python. Guido was (is?) one. I think there are a few Microsoft employees
who are paid to contribute. The rules aren’t different just because they
are being paid.

I’m not assuming anything. John Andersen made a suggestion and I’m
running with it for further discussion.

It seems to me that there are two viewpoints here: those who think that
having “batteries included” is important, and those who don’t.

Both groups agree that having unmaintained modules in the stdlib is a
problem that needs to be solved. The first group are looking for
solutions which would fix that while keeping the module in the stdlib.

(John’s idea was to give the community a better opportunity to step up
and contribute either time or money.)

The second group doesn’t care too much about keeping modules (especially
niche modules) in the stdlib. Some of them might not even want a stdlib
at all, outside of the bare bones needed to bootstrap enough of an
environment to install whatever you want.

So if there is little or no benefit to keeping a module in the stdlib,
then why try to keep it in the stdlib? If a module is unmaintained, push
it out of the stdlib, and the lack of maintenance becomes somebody
else’s problem.

I don’t think those two groups are ever going to be in full agreement,
but I think we should at least allow discussion and debate.

“We can’t allow crowd-funding to pay you to maintain this module,
because some of the other developers who aren’t maintaining it are
already being paid full-time to do other things.”


I take your point over most of what you say. Just to note, though, I’m strongly in favour of “batteries included”. And yet, I don’t think a community debate on “options to save the module” is a good idea every time we do agree to remove something. I don’t find that position to be at odds with not wanting to deprecate things any more than we have to.

And then Brett said:

People not noticing that modules are provisional seems to be our
failure. It seems to me that it is too hard to tell whether a module
is provisional or not. It’s not obvious.

It looks to me like the only indication that modules are provisional is
the documentation. See for example:

where making a module non-provisional only required doc changes. If
we’re expecting people to read the docs to know whether an API is stable
or unstable, we’re going to be disappointed :frowning:

As for users feeling that a module has been provisional for too long,
surely that’s not their decision to make unless they are the authors of
the module. They can ask (nicely) but they can’t unilaterally make
that decision.

Bringing things back to Steve Dower’s comment about moving provisional
modules to PyPI, that doesn’t solve the problem. The problem is that the
API is unstable. Moving it to PyPI doesn’t make that problem go away.
It’s still an unstable API. It might even change more rapidly because
it isn’t tied to the stdlib release schedule.

So it seems to me that if the problem is that provisional modules in the
stdlib change rapidly, moving them out into PyPI won’t solve that.


As I noted previously, the problem with provisional modules in the stdlib is that projects which support multiple versions of Python can’t require users to install the latest version of the module - the project has to write code that works with any (supported) version of the module.

It’s not the pace of change, it’s the lack of tools to manage that change when a module is in the stdlib.
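As a concrete illustration of the workaround projects typically reach for today (using the real typing_extensions backport on PyPI as the example): a project can pin or upgrade a PyPI package, but it can never ask for a newer stdlib module than the one its interpreter shipped with.

```python
# Common multi-version pattern: prefer the PyPI backport, which the
# project can require at a minimum version in its dependencies, and
# fall back to the stdlib module shipped with the interpreter.
try:
    # Third-party backport on PyPI; upgradable independently of Python.
    from typing_extensions import Protocol
except ImportError:
    # Stdlib version: fixed by the Python release, cannot be upgraded.
    from typing import Protocol  # available since Python 3.8


class Closeable(Protocol):
    """Any object with a close() method matches this protocol."""
    def close(self) -> None: ...
```

Every project consuming a fast-moving stdlib module ends up writing some variant of this shim, which is exactly the support burden described above.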


Actually, Mark didn’t ask the SC or the PSF. His post was to try and find someone willing to fund him and then have the PSF manage the money. (In fact, the SC commented on Mark’s proposal and Mark pointed out he actually didn’t directly ask the SC about anything. :grinning_face_with_smiling_eyes:)

Microsoft started paying Mark directly to work with Guido and the rest of the Python performance team at MS.

I’m fine with it as that’s visibly not public. As for a package over top-level, that’s typically a technical decision.

It’s definitely another solution that may help out.

Admittedly, though, my read of people’s responses is there’s more people arguing for dropping the concept of provisional modules than keeping it.


Oh wow.

There were a lot of things I was expecting to hear in this discussion but this wasn’t one of them — that pip users/community is not a largely representative sample of CPython users/community — especially followed up by using that as an explanation for how pip’s efforts at change/community management don’t translate in any way whatsoever to CPython.

I’m gonna step away from this conversation now, largely because of one individual.

Between suggesting crowdfunding for standard library maintenance immediately after explaining to me1 about how funding for OSS projects is complex + difficult, using a jest-y tone to seemingly dismiss my point that a proposed discussion style is too expensive (which I can only read as joking that I don’t know that planning takes effort), not answering direct questions about what the expectations are about downstream use of provisional modules, and the use of a generally dismissive tone in replies to me — I don’t have the energy to politely respond to @steven.daprano anymore in this discussion and am gonna step away instead.

1 Based on the fact that it’s a reply to my post + the use of email etiquette on all his responses.

Anyway, I was gonna read through and summarise If Python started moving more code out of the stdlib and into PyPI packages, what technical mechanisms could packaging use to ease that transition? in a few paragraphs, PEP-style, talking about the approaches discussed and trade offs — which someone else can pick up now if they’d like to. I think it’ll be useful, since most folks here aren’t going to be reading that whole discussion to establish a common understanding of the proposal.

IMO it’s the best of all the approaches discussed so far and the only approach that replaces the provisional modules concept — which is a problematic concept, if I’m reading the room correctly.



Brett said:

"Microsoft started paying Mark directly to work with Guido and the rest of the Python performance team at MS."


Thanks for the update. It’s great to see Microsoft supporting Python so



I’m conflicted, because if my memory serves me correctly, when the idea of provisional modules was first made on Python-Dev or Python-Ideas, I’m pretty sure that I was against it. So in a sense it’s nice to be vindicated by the passage of time :slight_smile:

On the other hand, I don’t exactly see that provisional modules have been broadly harmful: I don’t see many end users outside of cutting edge development complaining about provisional modules. And I have certainly not had any personal bad experiences with them.

I’m not trying to just dismiss the experiences of those who have not enjoyed the provisional module experience, but:

  1. if we moved them out of the stdlib, I don’t see that their experience would be any different (an unstable API is still unstable no matter where it comes from);

  2. it would be nice to hear from the core devs who have been heavily involved in writing those provisional modules, to hear whether they think the experiment was a success.

Otherwise we’re only hearing from one side of the equation, the consumers of those modules. The idea was, as I recall, to make life simpler for the creators of the modules. Has that been successful or