I know this is a joke but I want to emphasise that this (both the workload and the blame games) affects more than PyPA. Try `[project.dont_blame_pypa_or_brew_or_linux_distros_or_any_other_packagers_or_repackagers_or_end_user_packaging_also_since_this_doesnt_translate_to_repackagers_it_is_unusable_for_anything_likely_to_be_installed_via_conda_or_system_package_managers_so_dont_use_for_webservers_or_any_domain_with_a_significant_proportion_of_conda_users.default-optional-dependencies]`.

> And we’re not really worried that major ecosystem components like `packaging` or `attrs` would start using this without thinking about and testing the results, right?
Does it matter if it’s only the major ones? What proportion of your dependency trees would you consider so major as to be above packaging mishaps [1]? How many packages with extras do you know that test both with and without the extras (I know one and I’m the one who insisted on putting the test there!)?
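
For what it’s worth, such a test doesn’t have to be elaborate. Here is a minimal sketch, where the package name `pkg`, the `click` dependency behind a hypothetical `[cli]` extra and the `EXTRAS_INSTALLED` variable (set by a two-job CI matrix) are all invented for the example:

```python
import importlib.util
import os

import pytest

# True when the optional dependency behind the hypothetical [cli] extra is present.
HAVE_CLI_DEPS = importlib.util.find_spec("click") is not None


def test_environment_matches_ci_matrix():
    # Fail loudly if a CI job installed (or failed to install) the extra it
    # claimed to, so the "without extras" job can't silently become a second
    # "with extras" job.
    assert HAVE_CLI_DEPS == (os.environ.get("EXTRAS_INSTALLED") == "1")


@pytest.mark.skipif(not HAVE_CLI_DEPS, reason="needs the [cli] extra")
def test_cli_is_importable():
    from pkg import cli  # hypothetical module that requires the extra
    assert cli is not None
```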

> I want to stress that this goes beyond installing “nice to have” dependencies. It affects the required dependencies, like Qt, that come in different flavors.
I don’t understand why the answer to interchangeable dependencies isn’t just a runtime check:
```python
# `foo` and `bar` stand in for two interchangeable backends;
# NoBackendError is the library's own ImportError subclass.
class NoBackendError(ImportError):
    pass

try:
    import foo
except ImportError:
    try:
        import bar
    except ImportError:
        raise NoBackendError("Helpful instructions here...")
```
No install-time guesswork is required and it’s compatible with every package management system I know of. Better yet, if you can make the backend selection explicit in the way the library is used (e.g. make the user type `from qt_agnostic_library.pyqt5 import ThingViewer`), then you can not only make the error more precise but also greatly reduce the reproducibility, mutual exclusivity and testing pains I wrote about before [2].
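
As a rough illustration of the explicit-selection idea (the `qt_agnostic_library` name, its `pyqt5` submodule and the error wording are all made up for the example):

```python
# qt_agnostic_library/pyqt5.py -- a backend-specific submodule, so the
# user's own import line states which flavour of Qt they have chosen.
try:
    from PyQt5 import QtWidgets
except ImportError as error:
    raise ImportError(
        "The PyQt5 backend requires PyQt5. Install it with whichever "
        "package manager you used to install this library "
        "(e.g. `pip install PyQt5` or `conda install pyqt`)."
    ) from error


class ThingViewer(QtWidgets.QWidget):
    """Viewer built on the explicitly requested PyQt5 backend."""
```

Because the failure happens at the exact import the user wrote, the message only ever has to describe the one backend they asked for.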

> Then I’d have to write code so a `ModuleNotFoundError` tells them to install it again, but this time with the CLI extra.
Is this really such a burden to users? They have to run a command that they probably just ran <30 seconds ago but with `[cli]` appended.
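
And the code on the developer’s side is correspondingly small. A minimal sketch, assuming `click` is the dependency behind a hypothetical `[cli]` extra of a package called `pkg`:

```python
# pkg/cli.py -- point the user at the extra rather than the raw dependency.
try:
    import click
except ModuleNotFoundError as error:
    raise ModuleNotFoundError(
        "pkg's command line interface needs the [cli] extra. "
        "Re-run your install command with it, e.g. `pip install 'pkg[cli]'`."
    ) from error
```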

> I’d also have to explain to them that just copying the big “pip install pkg” from pypi.org and pasting it into their terminal will result in a broken install.
Well, how about proposing to make this message customisable, to list all the options, or possibly even to remove it if the decision is sufficiently nuanced to need a proper explanation?

> I could take upon the burden myself and publish two packages, build two pipelines, version two packages independently while also trying to keep them synced.
This is getting heavily into perspective-driven territory, but if keeping the two in sync is anything more than a minimum version constraint and a test run on that minimum version, then I’d say that the library has stability/usability issues that prevent it from really being a suitable library. I actually find this splitting process improves the library, since it forces you to really see the usability of the library’s public API from a consumer’s perspective.
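
To make “keeping them synced” concrete, here is a minimal sketch of the pinned-minimum check; the `pkg-core`/`pkg-cli` split, the `1.4` minimum and the `MIN_VERSIONS_JOB` variable (set by a CI job that explicitly installs `pkg-core==1.4`) are all invented for the example:

```python
import os
from importlib.metadata import version

import pytest


@pytest.mark.skipif(
    os.environ.get("MIN_VERSIONS_JOB") != "1",
    reason="only meaningful in the pinned-minimum environment",
)
def test_running_against_declared_minimum():
    # pkg-cli declares pkg-core>=1.4; this job checks that the constraint is
    # genuinely tested rather than just written down.
    assert version("pkg-core") == "1.4"
```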
Specifically for this go-to `astropy` example [3]: I’m not an `astropy` user but I have had to work out packaging-themed issues within `astropy`-based projects on behalf of real `astropy` users. What really strikes me is that the issues all stem from the unfortunate choice to stuff an entire domain of science into one single PyPI package. You can get a sense of this by looking at their API table of contents: a huge range of functionality covering core data structures, specific (independent) types of calculation or analysis, visualisations, as well as IO for umpteen different file formats. This results in the awkward dependency situation, but it also means that the installation footprint is 40MB without even including the dependencies, which the user has to pay for even if they only want to do one thing.
This monopackage pattern is something I desperately want to see less of. It leads to these dependency issues. It encourages people to do crazy things like `rm -rf`-ing bits of `site-packages` [4]. Such packages almost certainly increase in size with each release, so I even see people deliberately try to lock themselves to as-out-of-date-as-possible versions just to get their deployment sizes down. It makes the contributor side miserable, since you have to read a book, learn about some build system, then run an insanely long build+test cycle [5] just to submit a patch that was quite likely in a pure-Python part of the code base. Splitting `astropy` into a tree of single-function packages would solve every single one of these issues. I know modularisation comes with extra baggage (version management [6], cross-project documentation and navigation, occasionally duplicating helpers, the time it takes to do the split itself) but it brings so many benefits, whereas this PEP can only solve (or rather hide) the dependencies issue from beginners (at the expense of making it worse for everyone else).
[1] It only needs one that isn’t.
[2] `grep` for Qt.
[3] I was wary of touching this because I was worried that it would read as a dig at `astropy`. I promise that it isn’t intended that way.
[4] Yes, I really saw this happen. The issues it caused got quite a long way from the offending developer due to heavy usage of lazy importing in libraries (also an unsavoury side effect of monopackages).
[5] I think it was about 48 hours last time I ran `scipy`’s tests, not something I’d be keen to come back to.
[6] Lower version bounds testing solves this much more easily than you’d expect.