Attitude to changes in Python (was: nicer interface for str.translate)

I’m a bit worried about the general attitude on this forum. It seems the bias is heavily against making any improvement.

For example, for Nicer interface for str.translate it should be a non-issue to make this function raise on invalid keys. That would break only code that is already broken, and it would save 99% of devs from making a trivial mistake.
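For readers who haven’t hit it, the trap being discussed is that str.translate expects a table keyed by Unicode ordinals (ints), not by one-character strings; today a mis-keyed table is silently ignored rather than raising. A minimal sketch:

```python
# str.translate looks up ord(ch) for each character, so string
# keys never match: the call silently returns the input unchanged.
table_wrong = {"a": "b"}       # common mistake: str keys
table_right = {ord("a"): "b"}  # correct: int ordinals

print("banana".translate(table_wrong))  # banana  (no change, no error)
print("banana".translate(table_right))  # bbnbnb
```

The proposal in the linked thread is essentially that the first call should raise instead of doing nothing.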

Try being a maintainer of a project that gets tens of millions of downloads per month, that fields complaints about any change causing any perceived or real issue, and that receives requests like this constantly for years on top of a huge backlog of other things to focus on; then re-evaluate your statement. The bias is towards making changes that enable others to write code to do what they want. Not everything needs to be in the standard library, or implemented in exactly the way you want.


Relevant XKCD


Somewhere in there must be the only time I have used str.translate in any significant code:

I have used it a few times, but I agree that it is not very intuitive, or not exactly what is wanted. In the linked issue I didn’t really want to “translate” anything, just to remove some set of characters from a string. In fact I didn’t even want to do that: really I wanted to count the number of characters that were not in some set, and translate must have been the closest thing I found at the time. I probably just looked through dir('') to see what might be of use, but it’s an odd mix of possibilities:

capitalize casefold center
 count encode endswith expandtabs find format format_map index isalnum isalpha
 isascii isdecimal isdigit isidentifier islower isnumeric isprintable isspace
 istitle isupper join ljust lower lstrip maketrans partition removeprefix
 removesuffix replace rfind rindex rjust rpartition rsplit rstrip split
 splitlines startswith strip swapcase title translate upper zfill

Out of these 47 methods I count about 11 that I use regularly. Most I never use. That could be an argument for adding more methods (why not? The interface is hardly minimal). I am sure, though, that if I were maintaining the str class I would see it the other way around: we already maintain loads of methods that hardly anyone ever uses, so please don’t add more.
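For what it’s worth, the task described above (counting the characters of a string that are not in some set) can be done either via translate or directly; the names and data here are illustrative:

```python
text = "hello, world"
unwanted = set(",! ")

# translate route: mapping an ordinal to None deletes that
# character, so count what survives the deletion.
delete_table = {ord(c): None for c in unwanted}
count_via_translate = len(text.translate(delete_table))

# direct route: arguably says what is meant.
count_direct = sum(1 for c in text if c not in unwanted)

assert count_via_translate == count_direct == 10
```

The generator expression is the clearer spelling for this job, which rather supports the point that translate was only ever an approximation of what was wanted.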

This sort of thing is often said on these lists, but frankly it is misleading at best. Firstly, in the context above I wanted something that I could use in SymPy. Currently SymPy has precisely one hard dependency, mpmath, which in turn has no dependencies and is absolutely integral to SymPy’s core functionality. There is zero chance SymPy would add a dependency on some library to get this functionality, even if it were pure Python (in which case it wouldn’t help anyway). Either this is in the stdlib or SymPy won’t use it.

Secondly, a library with this functionality cannot become popular unless it has a lot more to offer. Even for use in a single project not shared with anyone else, I wouldn’t want to add a dependency for this. If the library had a lot more functionality of general use (e.g. like Boost in C++), then maybe I would use it for some things. Otherwise I don’t see how any such library could become popular; and if it did, the functionality still would not “end up in Python core”, because then the reply would be: just pip install python-boost if you want that function.

There is definitely a catch-22 here: proposals for additions to the stdlib are met with the suggestion to make a third-party package to see if it gains popularity. However, I would never install that package, although I might use the functionality if it were in the stdlib.


But by that argument literally every dependency needs to be in the stdlib, doesn’t it?


No; the catch-22 is that something simple like a string function will never become a popular package, because existing popular packages would never add such a trivial dependency. Therefore, anything sufficiently simple can never pass the “popular 3rd-party package” threshold for inclusion in the stdlib.


If that were the ONLY argument for inclusion, then you’d be correct: a lot of trivial packages would never gain currency on PyPI. But fortunately, it isn’t. There are other valid arguments, including “all of these popular projects have created ad-hoc, slightly incompatible versions of this, and it causes confusion”. There is also “this is a good target for compile-time optimization” (as we’re seeing with dedent, although that’s already in the stdlib, so it’s not quite equivalent).

But I don’t think this is a problem. A trivial package that everything depends on is a vulnerability - just ask the JavaScript folks about that. So if it’s so trivial that you feel it’s not worth pulling in a third-party dependency… it probably isn’t worth one! And that’s where “recipes” come in. We have some of those in the Python docs already (e.g. itertools), and sometimes, when there’s a coherent collection of them, you end up with a useful set of tools (e.g. more-itertools). So that’s a different path to success: start by offering something for people to copy and paste, and then see how useful and successful it is.
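As a concrete instance of that copy-and-paste path: the itertools documentation ships small recipes such as take, which more-itertools later packaged up alongside many others:

```python
from itertools import islice

# The "take" recipe from the itertools docs: small enough to
# paste into a project rather than pulling in a dependency.
def take(n, iterable):
    "Return first n items of the iterable as a list."
    return list(islice(iterable, n))

print(take(3, range(10)))  # [0, 1, 2]
```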

Exactly, and if anyone would like to disagree with this point, please begin with examples:

Give an example of a small feature in a third-party library that was subsequently included as a change or addition in the stdlib based on its popularity in the third-party context.


Yeah, this didn’t go anywhere good.