Proper way to do user package installation in a post-668 world?

And that’s why it’s bad to install into the system directory. I’m not disputing that. But there is supposed to be a user directory. On my main system, sys.path has both /usr/local/lib/python3.12/site-packages and /home/rosuav/.local/lib/python3.12/site-packages - so the system ones will take precedence if there’s a conflict, but I can happily install stuff into my home directory without problems. And this happens entirely by default.

So why doesn’t this happen with the vanilla Debian install, and how can I achieve this?


At least on my Debian systems, ~/.local/lib/$PYVER/site-packages comes earlier in sys.path than /usr/(local/)lib/$PYVER/dist-packages, which would potentially cause a problem if I pip install --user something and it ends up shadowing distro-supplied libs.
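(For anyone wanting to check the order on their own system, the stdlib can report it directly; nothing Debian-specific here:)

python3 -c 'import sys; print("\n".join(sys.path))'   # full search path, in order
python3 -m site --user-site                           # just the per-user site-packages dir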

I’m not sure exactly how to achieve what you’re wanting on Debian, since long before PEP 668 I’ve been compiling and altinstalling Python into my homedir and also using venvs for basically everything I want to install from PyPI anyway. It’s a habit picked up long ago from RHEL’s long-standing recommendation that you not use the distro-supplied Python to run anything except distro-packaged applications, but it seems to apply equally in other distros (doubly so now that PEP 668 is taking hold).
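(For anyone curious, a minimal sketch of that homedir altinstall, assuming a CPython source tarball has already been unpacked; the version in the comment is illustrative:)

./configure --prefix="$HOME/.local"
make -j"$(nproc)"
make altinstall   # installs e.g. ~/.local/bin/python3.12 without touching "python3"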


That seems rather user-hostile to me. Virtual environments are NOT sufficient for packages that should provide command-line tools (such as the aforementioned pepotron). User installation should be able to handle this, and it avoids the upgrade problem (since apt is never going to touch packages in ~/.local).

If that’s really how it is, I’m going to recommend overriding the lock and going back to allowing sudo pip install, since the recommended way is worse than that.


I raised this a while ago and no one has cared to fix it.

Personally, like you, I have the skills to work around the annoyance.

What I do is maintain one venv that I put on my PATH and install tools into it.

I will update this later with an example for people who are not sure what to do.


I use individual venvs like ~/lib/$tool and symlink ~/bin/$tool to ~/lib/$tool/bin/$tool (no need to “activate” these venvs for any of the command-line tools I run, and there are dozens). Keeping them in separate venvs means I don’t have to worry about whether they’re coinstallable, since they can have conflicting requirements and not affect one another in the slightest. Since ~/bin is added to $PATH at login by the default ~/.profile on Debian, it just works out of the box.
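(Concretely, for a hypothetical PyPI tool called sometool, that pattern boils down to:)

python3 -m venv ~/lib/sometool
~/lib/sometool/bin/pip install sometool
mkdir -p ~/bin
ln -s ~/lib/sometool/bin/sometool ~/bin/sometool   # entry point runs fine without activation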

I never really had much luck with pip’s “user installs” (once those became a thing), because regardless of whether they’re earlier or later than system packaged Python libs in your sys.path you can still end up with one shadowing the other and causing problems for applications run as your user. Yes they might “just work” in a lot of cases, but in the handful of cases where they break horribly you’re back to using a separate venv anyway. Getting in the habit of always using venvs seems less newcomer-hostile than expecting them to diagnose weird library version conflicts that can lead to incomprehensible errors or, worse, subtle misbehavior that goes unnoticed for a long time.


As far as I can tell, this is quite close to what pipx does, with additional management features (upgrade, upgrade all). For Python applications (CLIs and so on) like pepotron mentioned earlier, pipx has a great user experience. I think it is good that pipx is mentioned in the error message, and I encourage people to give it a try if they have not yet. That makes me wonder whether pipx is even available in the system package repositories where this error message is shown. It also makes me think that maybe pipx should replace pip, and pip should only be available in virtual environments.
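(For illustration, the management commands in question, all standard pipx subcommands:)

pipx install pepotron   # creates a dedicated venv and puts the CLI on PATH
pipx upgrade pepotron
pipx upgrade-all        # upgrade every pipx-managed application at once
pipx list               # show installed applications and their venvs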

But of course pipx is not perfect from my point of view. As I mentioned in another thread here and here, I’d rather not have to use 2 different tools to do the same thing: apt and pipx to install applications. The good thing about using the system package manager (apt in this case) is that it notifies me when updates are available (while pip, pipx, conda, and so on do not / cannot).

I guess this post is somewhat tangential to the original post, and not fully on topic, sorry about that.


Here is the code I use; it works on Debian, Ubuntu, and Fedora.

# Determine the running python3's major.minor version, e.g. "3.12"
PY_VER=$(python3 -c "import sys;print('%d.%d' % (sys.version_info.major, sys.version_info.minor))")

LOCAL_BIN="$HOME/.local/bin"
LOCAL_VENV="$HOME/.local/tools.venv"
mkdir -p "${LOCAL_BIN}"

# If the venv was built against a different python3, rebuild it from scratch
if [[ ! -e "${LOCAL_VENV}/bin/python${PY_VER}" ]]
then
    echo "Removing venv built for old python version"
    rm -rf "${LOCAL_VENV}"
fi

if [[ ! -e "${LOCAL_VENV}" ]]
then
    echo "Creating tools venv"
    python3 -m venv \
        --system-site-packages \
            "${LOCAL_VENV}"
fi

# Keep pip itself up to date inside the venv
"${LOCAL_VENV}/bin/pip" install --upgrade --quiet \
    pip

ALL_PACKAGES="colour-text
    colour-filter
    ssh-wait
    update-linux
    "

# Install or upgrade everything in one pip invocation
# (ALL_PACKAGES is deliberately left unquoted so it word-splits)
"${LOCAL_VENV}/bin/pip" install --upgrade --quiet \
    ${ALL_PACKAGES}

# Report the installed version of each package
for PKG in ${ALL_PACKAGES}
do
    "${LOCAL_VENV}/bin/pip" list | grep "${PKG}"
done

# Symlink each tool's entry-point script into ~/.local/bin
for TOOL in \
    colour-filter \
    colour-print \
    ssh-wait \
    update-linux \
    ;
do
    ln -sf "${LOCAL_VENV}/bin/${TOOL}" "${LOCAL_BIN}"
done

The ALL_PACKAGES list is where I add or remove tools installed from PyPI. The last for loop symlinks the tool scripts into ~/.local/bin, which is on my PATH.

When the OS's installed python3 version changes, the venv is recreated; otherwise it is just upgraded.


While all of these options seem reasonable for someone who knows everything that’s going on, that isn’t really the important use-case here, since I’m perfectly happy compiling an entirely separate Python (or several, since I like having lots of versions), bypassing the entire issue. How is someone supposed to do this in a “normal” situation? It’s way too much hassle to do ANY of these examples.

I’m definitely tempted to revert to the previous model by just removing that file, which really isn’t the point of PEP 668. A single global venv seems potentially promising, but also not materially different from user installation. Why is user installation not a better-supported pattern?
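(For reference, “that file” is the EXTERNALLY-MANAGED marker; the path below is illustrative and varies by Python version, and somepackage is a placeholder. pip also offers a per-invocation override:)

ls /usr/lib/python3.11/EXTERNALLY-MANAGED                # the PEP 668 marker file
pip install --user --break-system-packages somepackage   # explicit opt-out, at your own risk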

sudo apt install pipx
pipx install pepotron

seems like a normal situation and not too much hassle to me. But maybe that is not the use case you really have in mind.

If we look only at applications, I guess my hope is that in the near future it is straightforward and common practice to publish applications such as pepotron to something like Flatpak / Flathub. I think for me that would be a good compromise. I believe application distribution is not something that should be solved within the borders of the Python package ecosystem only. I do not believe that, long term, we should expect ordinary users to have to use pip, pipx, or venv. Libraries are a different story.


But that’s completely different from the way of installing any Python package that you intend to actually import, right? This works ONLY for the one use-case of “install an application that happens to be written in Python and distributed on PyPI”. So that means there are a number of competing and completely incompatible ways to install things.


The usual point of view of distribution package maintainers, the ones making the decision to implement PEP 668 for the interpreters they distribute, is that users should be installing and running applications from that distribution’s packages. When users want something that’s not packaged in the distribution yet, it’s an opportunity to request an addition or get involved in the community’s packaging activities more directly.

Debian still doesn’t ship ensurepip and venv modules as part of the default python3 interpreter and libpython3-stdlib packages, though as of a couple of releases ago choosing the optional python3-full package has started to pull them in (plus the testsuite, idle, distutils, gdbm, 2to3, tk…). Using pip to install things from outside the distribution is already viewed as an advanced “if it breaks you get to keep both pieces” kind of activity, so it doesn’t seem there’s much interest on the part of the distribution in making that perceived foot-cannon any easier for unwary end users to fire.
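(In apt terms, assuming you only want working venvs rather than everything:)

sudo apt install python3-venv   # just venv/ensurepip support
sudo apt install python3-full   # or the whole kit: tests, idle, tk, and so on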


I guess I misunderstood the original post and the use case(s) it is about then.

For me, if one wants to write Python code, they need to know about virtual environments (sure, technically it is possible to depend only on the standard library and/or whatever is installed by the system package manager, but I guess that is out of scope). Typically on Debian/Ubuntu I never have pip installed globally (the only package that I apt-install is python3-venv). If I remember correctly, with the Windows installers it is also possible to skip the installation of a global pip. And this is what I recommend to everyone (beginners and experienced users). This way, if one wants to pip-install 3rd-party Python libraries (i.e. something from PyPI), the only way is from within a virtual environment.
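(That workflow in its entirety; requests here is just a stand-in for any PyPI library:)

python3 -m venv .venv
. .venv/bin/activate        # the venv bundles its own pip via ensurepip
pip install requests
python -c "import requests; print(requests.__version__)"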


There is also:

(contributors to this thread probably know this already but readers might not)


Until PEP 668 was implemented, the choice to use venvs was up to the developer or student.

Now that PEP 668 is implemented, you are forced to know about venvs.

That means that people who are learning now have one more thing they must learn about.


You can’t make a choice if you don’t understand the options. But I think I get your point.


For others, I consider venv to be an advanced option, not something I’d like to be teaching early on in a student’s journey with Python.


My go-to advice for people is to just install Miniconda in your home directory. If you don’t want to learn about conda environments, you don’t have to. If you manage to break your base environment, just delete ~/miniconda3 and reinstall it.
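(A sketch of that install, assuming Linux x86_64; the -b and -p flags are the installer’s documented batch-mode and prefix options:)

wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh
bash Miniconda3-latest-Linux-x86_64.sh -b -p ~/miniconda3
~/miniconda3/bin/conda init bash   # optional: hook conda into your shell startup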


My DuckDuckGo-fu is failing me; please could you link that? I’ve come to the conclusion that ‘never use the system Python’ is a safe, if high-effort, policy, but I’ve never seen official advice to that effect (not doubting it exists, just never really looked very hard).


I’d be hard-pressed to find it now. It came to a head in the days when the RHEL package maintainers were “backporting” Py3K features into Python 2.x because much of the system tooling and package management was written in Python and they wanted to be able to take advantage of some new functionality in the language without porting everything. Not long after that they started rewriting a lot of it in C, but by then the damage was done. I recall being pointed to a KB article at the time, but as I don’t have a RHEL subscription now I can’t access those, and they’re not indexed by public search engines anyway.


python -m venv --system-site-packages can be useful. Packages in a venv created by this command can easily import packages from the system environment; on the other hand, system packages won’t be affected by this venv. It’s just like a better pip install --user. For example:

$ python3.11 -m venv --system-site-packages ~/.local/share/global-venv
$ alias python=~/.local/share/global-venv/bin/python
$ alias pip="python -m pip"

$ sudo pacman -S ipython python-requests
$ pip install ipython
$ ipython  # ipython in the system environment
$ python -m IPython  # ipython in the global venv
$ python -c 'import requests'  # works well in the venv

$ pip show pip  # pip in the global venv
Name: pip
Version: 23.2.1
...
Location: ~/.local/share/global-venv/lib/python3.11/site-packages
...
$ unalias python
$ pip show pip  # pip in the system environment
Name: pip
Version: 23.2.1
...
Location: /usr/lib/python3.11/site-packages
...