With PEP 668, a new system was introduced to help prevent users from accidentally overwriting system packages when installing packages with pip, along with advice to use virtual environments or pipx for installing packages. The difficulty of virtual environments, and how often breakage of system packages actually occurs, has been discussed ad nauseam, and the general consensus seems to be:
Most of the time, virtual environments should be used.
Standalone, globally accessible programs should be installed using the system’s package manager.
If not in the system package manager, programs should be installed with pipx.
This is solid advice. It keeps everything separated to avoid breaking system packages. However, it overlooks one issue: what if the fundamental packages in the package manager are themselves outdated?[1]
By fundamental packages, I mean the packages used for managing other packages, such as pip or pipx. Currently in Ubuntu 24.04, the repositories have pip version 24.0 and pipx version 1.4.3, but these are not the most up-to-date versions of those packages (pip is at 24.3.1, pipx at 1.7.1).
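For reference, one way to see the gap is to compare what the distro ships with what is on PyPI (package names here assume Ubuntu 24.04, and pip index is still an experimental subcommand):

apt policy python3-pip pipx           # versions available from the Ubuntu repositories
python3 -m pip index versions pip     # latest releases on PyPI
python3 -m pip index versions pipx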
How is a user supposed to keep these packages up to date if they are not being kept up to date in the system repositories, and they are not supposed to install packages with pip?
Personally, I maintain a very minimal set of packages installed with pip (which installs to the user directory, since I don’t install with sudo): normally just pip, pipx, pipenv, and maybe a couple of other packages that will be used everywhere. I’ve never personally had any issues with this, but that’s no guarantee that no one else will, or that it will continue that way for me in the future. Is there already a better way to do this, or is this an area that needs more discussion?
Making a venv and adding it to $PATH would be the logical and simplest thing to do for tools like pip/pipx. pip should be upgradeable inside any venv it gets used in, and ensurepip can be used to bootstrap pip if that is needed.
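A minimal sketch of that approach (the ~/.venvs/tools path is just an example, not a convention):

python3 -m venv ~/.venvs/tools                        # dedicated venv for the fundamental tools
~/.venvs/tools/bin/python -m pip install -U pip pipx  # pip upgrades freely inside the venv
export PATH="$HOME/.venvs/tools/bin:$PATH"            # persist this in ~/.bashrc or similar

# if a venv somehow lacks pip, ensurepip can bootstrap it:
~/.venvs/tools/bin/python -m ensurepip --upgrade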
There’s also the question of whether these packages need to be up to date (versus being known-working versions at release time), and if your requirements don’t align with the specifications of the system repository, why are you using it?
Your hidden second question is one of those where you will get N+1 opinions from N people, because it depends on all the choices both you and they have made about their systems, and it will probably give you as much clarity as asking what the best editor for Python is.
It’s the user’s prerogative to opt in to experimental or testing repos with their distro’s package manager, if they want the bleeding edge versions (and everything else that comes with that).
Most popular distros take a more conservative approach, and keep the majority of users on tried and tested, stable versions.
The simplest solution is: don’t worry about it (which sounds like it is pretty much how you’re currently handling things).
As long as you’re using a supported release of a distro that’s intended for use on developer client systems (such as Ubuntu, Fedora, or Debian testing), or even the latest version of a server distro (such as CentOS/RHEL/AlmaLinux, Ubuntu LTS, or Debian stable), they’re going to be new enough to handle bootstrapping installation and execution of other Python ecosystem tooling via pipx or uvx/uv tool.
Where folks get themselves in trouble is when they’re running a server distro that is nearing (or even past) its end-of-life date and trying to use the provided pip directly, rather than just using it to bootstrap a newer pip with python3 -m pip install --ignore-installed --user pip and then using that newer pip to install other components. (Bootstrapping a user-level install of uv instead of a newer pip is also often a reasonable option these days.)
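Concretely, the bootstrap step looks something like this (the uv variant assumes uv’s PyPI distribution):

python3 -m pip install --ignore-installed --user pip  # newer pip into ~/.local, system pip untouched
python3 -m pip install --user uv                      # or bootstrap uv instead and rely on uv tool / uvx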
(I know these packages often emit runtime warnings the second they’re out of date, but that’s really on the redistributors to handle: those warnings are intended for independently updatable components, so it’s reasonable to patch them out when making the redistributed versions. That doesn’t always happen, though.)
That seems like a pretty good solution for this: a single main venv for all the fundamental packages (which incidentally also solves the issue I mentioned before about large packages). But it raises the question: why is that not the default behavior? Maybe Python/pip should automatically create a “user venv” instead of blocking pip installs altogether? Wouldn’t that be a more streamlined way of handling things, one that preserves the distro-level packages while also making a more seamless transition from what users have done for years?
The thing is, using that command still results in the externally-managed-environment error, so you still need to use the --break-system-packages flag, as you could technically still end up shadowing system packages in a breaking way (at least, according to this post).
Shadowing system packages with a user install instead of overwriting them shouldn’t be emitting the system package warning (no correctly implemented system tool will pick up the user level install, so you’re not at risk of breaking system utilities)
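For example, a correctly implemented system tool would launch the interpreter in isolated mode, along these lines (illustrative shebang):

#!/usr/bin/python3 -I
# -I skips user site-packages and PYTHONPATH, so nothing installed
# with pip install --user can shadow this tool's imports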
Really? On every Linux system I know, ~/.local/bin takes precedence over /usr/bin in $PATH, and ~/.local/lib/python3.12/site-packages takes precedence over /usr/lib/python3.12/site-packages in /usr/bin/python -c 'import sys; print(sys.path)', so I don’t see how this could possibly not be shadowing system packages.
Just to double-check that it was not just my specific Ubuntu install, I spun up a quick Docker container so I had a fresh start of Ubuntu 24.04. Nothing was installed other than python3 and python3-pip, and I created a new user account for myself.
Here’s the Dockerfile:
FROM ubuntu:24.04
# only sudo, python3, and python3-pip on top of the base image
RUN apt update && apt install sudo python3 python3-pip -y
# unprivileged user account, mirroring a normal desktop setup
RUN useradd -ms /bin/bash -G video,users gabe && chown gabe -R /home/gabe
WORKDIR /home/gabe
USER gabe
CMD [ "/bin/bash" ]
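Built and launched with (the image tag is arbitrary):

docker build -t pep668-test .
docker run -it --rm pep668-test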
Then, I launched it and ran the command python3 -m pip install --ignore-installed --user pip, and I did indeed get the error:
gabe@09a05d4977c2:~$ python3 -m pip install --ignore-installed --user pip
error: externally-managed-environment
× This environment is externally managed
╰─> To install Python packages system-wide, try apt install
    python3-xyz, where xyz is the package you are trying to
    install.

    If you wish to install a non-Debian-packaged Python package,
    create a virtual environment using python3 -m venv path/to/venv.
    Then use path/to/venv/bin/python and path/to/venv/bin/pip. Make
    sure you have python3-full installed.

    If you wish to install a non-Debian packaged Python application,
    it may be easiest to use pipx install xyz, which will manage a
    virtual environment for you. Make sure you have pipx installed.

    See /usr/share/doc/python3.12/README.venv for more information.
note: If you believe this is a mistake, please contact your Python installation or OS distribution provider. You can override this, at the risk of breaking your Python installation or OS, by passing --break-system-packages.
hint: See PEP 668 for the detailed specification.
Huh, checking further, it looks like Fedora never actually added the EXTERNALLY-MANAGED file - the corresponding change proposal is still pending (see Changes/PythonMarkExternallyManaged - Fedora Project Wiki). So what I thought was the result of passing --user is in fact a symptom of the external-management marker file being absent entirely.
PEP 668 itself isn’t particularly clear on the expected behaviour of --user when the underlying Python installation is externally managed: the prohibition is implied by user installs not being mentioned in the section on marking externally managed environments, rather than by explicitly stating that user-level installs are just as prohibited as altering the system directories, even though only one of those operations requires privilege escalation. (And since the topic isn’t directly mentioned, no rationale for that stance is given, either.)
The reference to pip install --user --break-system-packages pipx in the non-normative section of the PEP suggests Debian’s current behaviour was an expected outcome, though.
Regardless, a Debian or Ubuntu UX bug report is likely the best way of escalating the problem, since neither Fedora nor conda has actually rolled out the feature yet, and I’m not sure anyone else has either.
I disagree. It’s even called out as a case in the rationale of the PEP and was discussed at length AFAIK when the PEP was being drafted.
Quote below from the rationale, with formatting removed and emphasis mine.
The reason for this is that, as identified above, there are two related problems that risk breaking an externally-managed Python: you can install an incompatible new version of a package system-wide (e.g., with sudo pip install), and you can install one in your user account alone, but in a location that is on the standard Python command’s sys.path (e.g., with pip install --user).
If a system package that uses Python isn’t running in isolated mode, that’s a bug in the system package. (The reverted Fedora change referenced from the PEP wasn’t about running in isolated mode; it was about attempting to move the entire platform Python installation to non-standard locations, which caused problems for building C extensions.)
By isolated mode do you mean monkeypatching all the shebangs to use #!/usr/bin/python -P? As far as I’m aware, that’s exclusively a Fedora thing. Not even the other RPM based distributions (RHEL, Alma, OpenSUSE) do that.
I meant -I (isolated mode, equivalent to -sEP), rather than just -P.
There are so many ways to mess up a non-isolated Python runtime that specifically restricting user-level installs that may shadow system packages, while ignoring all the other ways that failing to use isolated mode properly (like reading PYTHONPATH and other user-level environment variables) can cause problems, seems dubious.
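To illustrate the difference (the /tmp/whatever entry is just a stand-in):

PYTHONPATH=/tmp/whatever python3 -c 'import sys; print("/tmp/whatever" in sys.path)'
# True: a regular run honours user-level environment variables
PYTHONPATH=/tmp/whatever python3 -I -c 'import sys; print("/tmp/whatever" in sys.path)'
# False: isolated mode ignores PYTHONPATH and user site-packages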