Perhaps this is just asking for a TL;DR: I’ve seen the discussion too late and the answer may already be right here. I’d appreciate that (a canonical page about it).
True, there are tools for this, but let’s say (and I’m asking because I just got this advice about externally managed environments) I have a system which will only run a single application:
A virtual environment is a bit of overhead. For every script I keep on that system, I need to wrap the command in a sh script that first activates the environment.
apt install python3-xyz doesn’t manage requirements files, and many packages may not be available through OS package installers yet.
pipx doesn’t manage a requirements file (not as straightforwardly as pip does).
So which is, in this case, the best solution that doesn’t require much overhead of me, keeps track of requirements, and doesn’t require breaking system packages?
I’d like to know, for future projects.
For now, I’ve set up a virtual environment, but I have to admit it was a bit of a pain…
You don’t! Scripts have a shebang rewritten to point to the venv’s python, which knows how to find its lib directory. So you can run the script directly (with its full path, or via a symlink in a dir on PATH, or after adding the venv bin dir to PATH).
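For example (a minimal sketch; the venv location and the requests dependency are just assumptions for illustration), a script whose shebang points at the venv’s interpreter can be executed directly, with no activation step:

```python
#!/opt/myapp/.venv/bin/python
# Hypothetical script: the shebang points at the venv's own interpreter,
# so running ./report.py (or a symlink to it on PATH) uses the packages
# installed in /opt/myapp/.venv, not the system site-packages.
import requests  # installed into the venv with that venv's pip

print(requests.get("https://example.com").status_code)
```

pip writes this kind of shebang automatically for console scripts it installs into the venv.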
pipx supports a recent specification that lets you write the dependencies you need in a special comment inside the script itself (creating a venv for you).
Multiple tools now support this spec, and very recently uv has supported locking for scripts (https://github.com/astral-sh/uv/releases/tag/0.5.17) so you can reproduce the same Python packages on another machine.
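As a sketch of what that inline metadata looks like (the Python version bound and the requests dependency here are just examples), the dependencies live in a specially formatted comment at the top of the script (PEP 723):

```python
# /// script
# requires-python = ">=3.12"
# dependencies = [
#     "requests",
# ]
# ///
# A runner such as `pipx run script.py` or `uv run script.py` reads the
# block above, builds a venv with those dependencies, and executes the
# script inside it.
import requests

print(requests.get("https://example.com").status_code)
```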
Yes, for single-file scripts you can use inline dependencies. For larger projects, if you can make them pip-installable, you can use console-script entry points to generate the script files that will launch using the correct interpreter.
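As a rough sketch (the package and function names are invented for illustration), the Python side of a console-script entry point is just an ordinary function; an entry such as `myapp = "myapp.cli:main"` under `[project.scripts]` in pyproject.toml tells pip to generate a `myapp` launcher whose shebang points at the interpreter of whatever environment the package was installed into:

```python
# myapp/cli.py -- hypothetical module referenced by the entry point
import sys


def main() -> int:
    # The generated `myapp` launcher imports this module and calls main()
    # using the venv's interpreter, so no manual activation is needed.
    print("running with", sys.executable)
    return 0


if __name__ == "__main__":
    sys.exit(main())
```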
If you just have a collection of scripts and a requirements.txt file, I’m not aware of a tool that makes it easy to create a venv and point the scripts at it.
Ah, it’s interesting that they’ve chosen to do locks like that. I was doing the same with ducktools-env, except using the requirements.txt pip-compile format, which means our files clash: the existence of one tool’s .lock file prevents the other tool from launching the script.
As ducktools-env has a user base of, well, probably just me at the moment[1], I’ll change my lockfile naming scheme so that the existence of the lockfile I generate doesn’t prevent uv from launching the script, and vice versa. Hopefully PEP 751 will eventually result in a standard lock file format that every tool can install from.
One day I’ll learn to tell people about things I’m working on. The main feature for this is bundling PEP-723 scripts into zipapps that can then be launched without needing to have a script runner installed. ↩︎
Thanks for the advice. I’ll give it a look, though I don’t like to depend on external tools.
Still, I can’t avoid pointing out that, on several occasions, it will be unnecessary overhead.
We are developers; a warning could be enough instead of a blocking error, because most of the time it’s not laziness: we know what we are doing.
I wonder (I imagine it does, but I’m going to verify it) whether this also happens inside a container, which is already a separate environment, mostly dedicated to a single system, and whether all the Dockerfiles out there are breaking because of this.
It is a warning, in the sense that you can use the --break-system-packages option to override it.
And to be clear, it’s the distribution that controls the existence of this message. If you’re managing your own distribution (which is essentially what you are doing in a container) you could consider removing the file that triggers the message[1].
I’m not going to say what that file is - if you’re not motivated enough to go and find out, I’d be concerned that you weren’t going to be comfortable taking responsibility for the possible impact of removing it ↩︎
It’s not a warning, since it blocks everything that worked in the previous version unless users do something about it (a flag, file deletion, additional scripts/external tools/script changes).
That advice is exactly what this topic says not to suggest; I’ve seen the file-removal suggestion above and in the topics mentioned above, and I know about the flag, and even how to just stick to the rule.
I am just pointing out that I don’t think it was necessary to add it as a hard requirement; it could have been more like a strong suggestion (especially for development machines).
And as has already been pointed out, that file is added by your Linux distro (based on a thoroughly documented behavior for tools which respect its presence), so complaining to the Python community about it doesn’t really help. If you want something changed, bring it up with the people who maintain your Linux distribution that installs the file.
It was the distros that requested this feature, so that users would be told that they are about to potentially break their system. Too many tools that are critical to a system are written in Python, and PyPI packages can and do break them. There is not a chance of getting the distros to change their position on this.
Use a venv, it’s easy to do. I set up one that I put all my PyPI packages into and use that one venv for all my personal scripts.
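For instance, a minimal sketch of that kind of setup (the venv path and the requests package are just examples, and the bin/ layout assumes a POSIX system):

```python
# Create one shared venv for personal scripts and install packages into it
# using that venv's own pip, leaving the system Python untouched.
import subprocess
import venv
from pathlib import Path

env_dir = Path.home() / ".local" / "scripts-venv"   # assumed location
venv.create(env_dir, with_pip=True)

subprocess.run(
    [str(env_dir / "bin" / "python"), "-m", "pip", "install", "requests"],
    check=True,
)
```

Personal scripts can then point their shebang at that venv’s interpreter, as described earlier in the thread.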
Yes, I’m not complaining about anything.
I am just pointing out that the suggestion may not be so wrong, depending on the system (and asking what the guidelines are for the lowest possible overhead, in terms of work and tools).
I must have missed an update, though. I installed a new pip on an existing system (usually not connected to the web except when making changes) and noticed it.
When I originally wrote this post, the top suggestions gave no context: they told users to delete the externally-managed file or use --break-system-packages. Those solutions may be the correct ones for you but, in general, they are not the correct solution for everyone.
Suggesting deletion of the file as the first solution, with no context, contradicted the reason the distros include the file: so that users first consider solutions like venv, pyenv, pipx, uvx, etc. However, the answers have now been updated to include some of the solutions the distros want users to be pointed to; this thread is many months old.
It’s not “depending on the system” so much as it’s depending on whether the user understands the implications and is willing to accept the consequences.
Advice on the internet is typically taken without acknowledging that nuance, so you get people overriding the warning, breaking something critical, and then complaining that they don’t know what to do now. You shouldn’t override the warning if you’re not confident in your ability to fix a broken OS by yourself.
I have no idea whether you, personally, are capable of handling a situation where you break the OS. So I can’t advise you. I can say that breaking the OS in a container is, in my experience, less of a problem than breaking the OS on an actual system. You delete the container, fix the container build script to not break the OS[1], and then rebuild the container. So maybe it’s OK to override when using a container. You still have to accept that it’s your problem to fix, though, and the internet won’t help you (in particular, neither the Python packaging community, nor your distro support channels, will have much sympathy…)
I’m not saying it is wrong to point out that, with no context, it’s bad advice.
I didn’t see anyone pointing out that there are several situations where this change actually adds unneeded overhead, so I wanted to bring in a different point of view as well (I landed here while searching for what changed and where).
@barry-scott I strongly disagree with that, for so many reasons that I won’t be able to describe them all.
1. If a project installs an additional package, it is very likely not a default one.
2. If a system tool relies on a Python package, I hope it is maintained and adapted to the latest version (otherwise, shouldn’t it use its own venv?).
3. It is highly probable that I have much more knowledge of my specific system than someone who designed a general-purpose tool (trying to cover most cases, but not all; none of us owns a crystal ball).
All of these (and many others), IMO, don’t justify a feature which is probably breaking thousands of tools (my position is always that if you’re going to break a lot of existing things, there must be a very, very good reason; I don’t see it, and none of the answers has provided one).
(note: I added [Split] to the title written by a mod to make it clear that this discussion was created by splitting out posts from a longer thread. See the link below the first post for context)
Often people update a package that is used by a system tool and the new version has a breaking API change or subtle behaviour change.
At the point the system ships that is likely true, but it may not be in two months’ time.
You may do, but many people do not have the time to become knowledgeable admins and will break their system if they blindly assume they can disregard the warning.
Converting to a venv is a one-off workflow change.
Once that’s done, the user will not experience a system failure that is often beyond their knowledge to fix.
For 1 and 2, it would probably have been better to check whether the “pip install” is actually going to change some package that system tools require (which would then mean updating a common repository of requirements, with their versions) and block only in that case, instead of blocking blindly. That seems unlikely to happen anyway, and I could simply be warned if I am requesting a downgrade (when a fresh install is made, all of the packages and requirements will hopefully be at their latest versions; otherwise, tools should make their own venv).
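For what it’s worth, a rough sketch of that kind of check is already possible from the user side with pip’s dry-run installation report (assuming a reasonably recent pip, 23+; the package name and the exact report fields used here are my reading of the report format, so treat this as an illustration rather than a guaranteed interface):

```python
# List which already-installed distributions a `pip install` would change,
# without actually installing anything.
import json
import subprocess
import sys
from importlib import metadata

packages = ["requests"]  # whatever you were about to install

# Note: on an externally managed system interpreter, pip may refuse even a
# dry run unless --break-system-packages is added (nothing gets installed).
result = subprocess.run(
    [sys.executable, "-m", "pip", "install", "--dry-run", "--quiet",
     "--report", "-", *packages],
    capture_output=True, text=True, check=True,
)
report = json.loads(result.stdout)

for item in report.get("install", []):
    name = item["metadata"]["name"]
    new_version = item["metadata"]["version"]
    try:
        current = metadata.version(name)
        print(f"{name}: {current} -> {new_version} (already installed, would change)")
    except metadata.PackageNotFoundError:
        print(f"{name}: new install at {new_version}")
```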
To sum up, since this is the topic after the split (thanks @MegaIng), the possibilities for an effective and self-configuring production environment (I would define it more as an end-target deployed environment) are:
Make a venv, which then requires:
changing the installation scripts
changing (or creating) launcher scripts that activate the venv first
A shebang pointing at the venv, with the requirements directly inside the script in a dedicated comment:
this has the drawback of mixing up requirements, system-specific paths, and code, which does not seem great for versioning, maintenance, and migrations
External tools
Package things as libraries:
not always possible; think of a Django project with all of its apps and their .py files, for example
Feel free to add other points, or corrections to these.
What if a new version of a dependency gets released which is incompatible with one of the system tools?
Regardless, as @pradyunsg said in the originating thread, this is pip’s behaviour and it’s not going to change (at least, not without the various distro maintainers supporting any change). So let’s drop the discussion around using pip to install 3rd party packages into the system environment.