Proper way to do user package installation in a post-668 world?

What’s the right way for NORMAL users to install Python packages now? Weirdos like me who have a dozen different Pythons installed, mostly from source, don’t count; what are regular ordinary people supposed to do? In theory, it’s supposed to be possible to perform user installations, but I’ve run into this problem and don’t know what to advise.

Note: The true use-case for this is my brother’s Debian system, but that’s a messy one that has been upgraded several times and has a ton of other stuff going on. I can recreate the issue on a tiny Debian VM, though, so that’s what I’ll be using here.

Install a vanilla Debian Bookworm (current stable, v12.1) using the standard netinst ISO.

rosuav@debian:~$ sudo apt install python3-pip --no-install-recommends
Setting up python3-lib2to3 (3.11.2-3) ...
Setting up python3-distutils (3.11.2-3) ...
Setting up python3-setuptools (66.1.1-1) ...
Setting up python3-wheel (0.38.4-2) ...
Setting up python3-pip (23.0.1+dfsg-1) ...
Processing triggers for man-db (2.11.2-2) ...
rosuav@debian:~$ pip install pepotron
error: externally-managed-environment

× This environment is externally managed
╰─> To install Python packages system-wide, try apt install
    python3-xyz, where xyz is the package you are trying to
    install.

    If you wish to install a non-Debian-packaged Python package,
    create a virtual environment using python3 -m venv path/to/venv.
    Then use path/to/venv/bin/python and path/to/venv/bin/pip. Make
    sure you have python3-full installed.
    If you wish to install a non-Debian packaged Python application,
    it may be easiest to use pipx install xyz, which will manage a
    virtual environment for you. Make sure you have pipx installed.
    See /usr/share/doc/python3.11/README.venv for more information.

note: If you believe this is a mistake, please contact your Python installation or OS distribution provider. You can override this, at the risk of breaking your Python installation or OS, by passing --break-system-packages.
hint: See PEP 668 for the detailed specification.
rosuav@debian:~$ pip --version
pip 23.0.1 from /usr/lib/python3/dist-packages/pip (python 3.11)

In contrast, here’s what my own system does:

rosuav@sikorsky:~$ python3 -m pip install -U pepotron
Defaulting to user installation because normal site-packages is not writeable
  WARNING: The scripts bpo and pep are installed in '/home/rosuav/.local/bin' which is not on PATH.
  Consider adding this directory to PATH or, if you prefer to suppress this warning, use --no-warn-script-location.
Successfully installed pepotron-1.0.0
rosuav@sikorsky:~$ python3 -m pip --version
pip 23.1.1 from /home/rosuav/.local/lib/python3.12/site-packages/pip (python 3.12)

(had to specify -U since I already have pepotron installed, but that shouldn’t affect things)

The vanilla system won’t install into the user directory even if explicitly given the --user parameter. How do I re-enable user installations? Preferably with the automatic “can’t write to site-packages, doing a user installation instead” logic, which makes it perfectly smooth.

The concern is that user-installed libs can cause hard-to-diagnose failures for Python-based applications installed from system packages and then run by the user, since those applications may end up importing the user-installed libs instead of the versions supplied by the distro. The expectation is that users will install things into venvs and run Python from them, or use something like pyenv to install a separate interpreter that isn’t the system-supplied one and invoke that instead. Or Conda, I guess?

There are of course other (nastier) workarounds, like disabling or bypassing the PEP 668 warnings, but the root of the problem is that distro package maintainers don’t want users mixing distro-packaged Python and libs with user-installed things from PyPI (or elsewhere, for that matter).

And that’s why it’s bad to install into the system directory. I’m not disputing that. But there is supposed to be a user directory. On my main system, sys.path has both /usr/local/lib/python3.12/site-packages and /home/rosuav/.local/lib/python3.12/site-packages - so the system ones will take precedence if there’s a conflict, but I can happily install stuff into my home directory without problems. And this happens entirely by default.
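A quick way to see what the interpreter itself thinks about user installs (a sketch; the exact paths will differ per system):

```python
# Check whether the user site-packages directory is enabled and on sys.path.
import site
import sys

print(site.ENABLE_USER_SITE)       # True on a system where user installs work
print(site.getusersitepackages())  # e.g. ~/.local/lib/python3.X/site-packages
print(site.getusersitepackages() in sys.path)
```

If ENABLE_USER_SITE prints False, that explains the difference between the two machines.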

So why doesn’t this happen with the vanilla Debian install, and how can I achieve this?


At least on my Debian systems, ~/.local/lib/$PYVER/site-packages comes earlier in sys.path than /usr/(local/)lib/$PYVER/dist-packages, which would potentially cause a problem if I pip install --user something and it ends up shadowing distro-supplied libs.

I’m not sure exactly how to achieve what you’re wanting on Debian, since long before PEP 668 I’ve been compiling and altinstalling Python into my homedir and also using venvs for basically everything I want to install from PyPI anyway. It’s a habit picked up long ago from RHEL’s long-standing recommendation that you not use the distro-supplied Python to run anything except distro-packaged applications, but it seems to apply equally in other distros (doubly so now that PEP 668 is taking hold).


That seems rather user-hostile to me. Virtual environments are NOT sufficient for packages that should provide command-line tools (such as the aforementioned pepotron). User installation should be able to handle this, and it avoids the upgrade problem (since apt is never going to touch packages in ~/.local).

If that’s really how it is, I’m going to recommend overriding the lock and going back to allowing sudo pip install, since the recommended way is worse than that.


I raised this a while ago and no one cares to fix this.

Personally, like you, I have the skill to work around the annoyance.

What I do is maintain one venv that I put on my PATH and install tools into it.

I will update this later with an example for people not sure what to do.


I use individual venvs like ~/lib/$tool and symlink ~/bin/$tool to ~/lib/$tool/bin/$tool (no need to “activate” these venvs for any of the command-line tools I run, and there are dozens). Keeping them in separate venvs means I don’t have to worry about whether they’re coinstallable, since they can have conflicting requirements and not affect one another in the slightest. Since ~/bin is added to $PATH at login by the default ~/.profile on Debian, it just works out of the box.
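As a rough sketch of the pattern (the tool name and prefix here are made up, and it symlinks the venv’s own pip as a stand-in entry point so the example is self-contained; in real use you’d pip install the tool and symlink its console script):

```shell
# Per-tool venv + symlink pattern; PREFIX stands in for $HOME.
PREFIX="$(mktemp -d)"
VENV="$PREFIX/lib/demotool"   # one venv per tool (hypothetical tool name)
BIN="$PREFIX/bin"             # in real use ~/bin, already on PATH via ~/.profile

python3 -m venv "$VENV"
mkdir -p "$BIN"
# Real use: "$VENV/bin/pip" install demotool; then symlink its script.
# Here we symlink the venv's own pip as a stand-in entry point:
ln -sf "$VENV/bin/pip" "$BIN/demotool"

"$BIN/demotool" --version   # runs inside its venv, no activation needed
```

The symlinked script still runs inside its own venv because its shebang points at that venv’s interpreter by absolute path.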

I never really had much luck with pip’s “user installs” (once those became a thing), because regardless of whether they’re earlier or later than system-packaged Python libs in your sys.path, you can still end up with one shadowing the other and causing problems for applications run as your user. Yes, they might “just work” in a lot of cases, but in the handful of cases where they break horribly you’re back to using a separate venv anyway. Getting in the habit of always using venvs seems less newcomer-hostile than expecting them to diagnose weird library version conflicts that can lead to incomprehensible errors or, worse, subtle misbehavior that goes unnoticed for a long time.


As far as I can tell, this is quite close to what pipx does, with additional management features (upgrade, upgrade all). For Python applications (CLIs and so on) like pepotron mentioned earlier, pipx has a great user experience. I think it is good that pipx is mentioned in the error message. I encourage people to give it a try if they have not yet.

Which makes me wonder if pipx is even available in the system package repositories where this error message is shown. And also makes me think that maybe pipx should replace pip, whereas pip should only be available in virtual environments.

But of course pipx is not perfect from my point of view. As I mentioned in another thread here and here, I’d rather not have to use 2 different tools to do the same thing: apt and pipx to install applications. The good thing about using the system package manager (apt in this case) is that it notifies me when updates are available (while pip, pipx, conda, and so on do not / cannot).

I guess this post is somewhat tangential to the original post, and not fully on topic, sorry about that.


Here is the code I use that works on Debian, Ubuntu and Fedora.

# These definitions are assumed; adjust paths and package list to taste:
LOCAL_BIN="$HOME/.local/bin"
LOCAL_VENV="$HOME/.local/tools-venv"   # example path for the shared tools venv
ALL_PACKAGES="colour-filter colour-print ssh-wait update-linux"

PY_VER=$(python3 -c "import sys; print('%d.%d' % (sys.version_info.major, sys.version_info.minor))")

mkdir -p "${LOCAL_BIN}"

# Recreate the venv if it was built for an older python version
if [[ ! -e "${LOCAL_VENV}/bin/python${PY_VER}" ]]; then
    echo "Removing venv built for old python version"
    rm -rf "${LOCAL_VENV}"
fi

if [[ ! -e "${LOCAL_VENV}" ]]; then
    echo "Creating tools venv"
    python3 -m venv \
        --system-site-packages \
        "${LOCAL_VENV}"
fi

# Make sure pip itself is current, then install/upgrade the tools
"${LOCAL_VENV}/bin/pip" install --upgrade --quiet pip

"${LOCAL_VENV}/bin/pip" install --upgrade --quiet ${ALL_PACKAGES}

# Optional sanity check that each package landed in the venv
for PKG in ${ALL_PACKAGES}; do
    "${LOCAL_VENV}/bin/pip" list | grep "${PKG}"
done

for TOOL in \
    colour-filter \
    colour-print \
    ssh-wait \
    update-linux \
    ; do
    ln -sf "${LOCAL_VENV}/bin/${TOOL}" "${LOCAL_BIN}"
done

The ALL_PACKAGES list is where I add or remove tools installed from PyPI.
The last for loop symlinks the tool scripts into ~/.local/bin, which is on my PATH.

When the OS-installed python3 version changes, the venv is recreated; otherwise it is just upgraded.


While all of these options seem reasonable for someone who knows everything that’s going on, that isn’t really the important use-case here, since I’m perfectly happy compiling an entire separate Python (or several, since I like having lots of versions) and bypassing the entire issue. How is someone supposed to do this in a “normal” situation? It’s way too much hassle to do ANY of these examples.

I’m definitely tempted to revert to the previous model by just removing that file, which really isn’t the point of PEP 668. A single global venv seems potentially promising, but also not materially different from user installation. Why is user installation not a better-supported pattern?
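For reference, the marker PEP 668 uses is a file named EXTERNALLY-MANAGED in the interpreter’s stdlib sysconfig directory, which can be located like this (the exact path depends on the Python version, e.g. /usr/lib/python3.11 on bookworm):

```shell
# PEP 668's marker is a file named EXTERNALLY-MANAGED in the stdlib
# sysconfig directory; this prints the directory that would contain it.
python3 -c "import sysconfig; print(sysconfig.get_path('stdlib'))"
```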

sudo apt install pipx
pipx install pepotron

seems like a normal situation and not too much hassle to me. But maybe that is not the use case you really have in mind.

If we look only at applications, I guess my hope is that in the near future it is straightforward and common practice to publish applications such as pepotron to something like flatpak / flathub. I think for me that would be a good compromise. I believe application distribution is not something that should be solved within the borders of the Python package ecosystem only. I do not believe that long term we should expect ordinary users to have to use pip, pipx, or venv. Libraries are a different story.


But that’s completely different from the way of installing any Python package that you intend to actually import, right? This works ONLY for the one use-case of “install an application that happens to be written in Python and distributed on PyPI”. So that means there are a number of competing and completely incompatible ways to install things.


The usual point of view of distribution package maintainers, the ones making the decision to implement PEP 668 for the interpreters they distribute, is that users should be installing and running applications from that distribution’s packages. When users want something that’s not packaged in the distribution yet, it’s an opportunity to request an addition or get involved in the community’s packaging activities more directly.

Debian still doesn’t ship the ensurepip and venv modules as part of the default python3 interpreter and libpython3-stdlib packages, though as of a couple of releases ago choosing the optional python3-full package has started to pull them in (plus the testsuite, idle, distutils, gdbm, 2to3, tk…). Using pip to install things from outside the distribution is already viewed as an advanced “if it breaks, you get to keep both pieces” kind of activity, so there doesn’t seem to be much interest on the part of the distribution in making that perceived foot-cannon any easier for unwary end users to fire.


I guess I misunderstood the original post and the use case(s) it is about then.

For me, if one wants to write Python code, they need to know about virtual environments (sure, technically it is possible to depend only on the standard library and/or whatever is installed by the system package manager, but I guess that is out of scope). Typically on Debian/Ubuntu I never have pip installed globally (the only package that I apt-install is python3-venv). If I remember correctly, with the Windows installers it is also possible to skip the installation of a global pip. And this is what I recommend to everyone (beginners and experienced users). This way, if one wants to pip-install 3rd-party Python libraries (i.e. something from PyPI), the only way is from within a virtual environment.
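As a minimal sketch of that workflow (a throwaway path is used here so the example is self-contained; in real use you would pick a stable location such as ~/venvs/myproject):

```shell
# Minimal venv workflow: create the venv, then use its own interpreter/pip
# directly by path, no activation required.
VENV="$(mktemp -d)/demo-venv"
python3 -m venv "$VENV"

"$VENV/bin/python" -c "import sys; print(sys.prefix)"   # prints the venv path
# "$VENV/bin/pip" install some-library   # inside a venv, PEP 668 does not apply
```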


There is also:

(contributors to this thread probably know this already but readers might not)


Until PEP 668 was implemented, the choice to use venv was up to the developer or student.

Now that PEP 668 is implemented, you are forced to know about venv.

That means that a student has one more thing they must learn about.


You can’t make a choice if you don’t understand the options. But I think I get your point.


For others, I consider venv to be an advanced option, not something I’d like to be teaching early on in a student’s journey with Python.


My go-to advice for people is to just install Miniconda in your home directory. If you don’t want to learn about conda environments, you don’t have to. If you manage to break your base environment, just delete ~/miniconda3 and reinstall it.


My duck duck go fu is failing me; please could you link that? I’ve come to the conclusion that ‘never use the system Python’ is a safe, if high-effort, policy, but I’ve never seen official advice to that effect (not doubting it exists, just never really looked very hard).
