Given the number of invisible environment variables and scattered configuration files that change the behaviour of installation, the existence of a directory seems like an interesting place to draw the line.
(Your comment about behaviour getting weird when you mix up Python versions is fair, but that’s also the nature of a transition period. In ten years’ time, nobody will ever think about that again.)
I am surprised that PEP 582 was submitted to the SC without addressing some of the topics raised in this thread.
In particular, there is no official bin directory in PEP 582: python -m sometool is recommended, and commands that are not run as Python modules are effectively no longer supported.
I am a strong -1 on PEP 582, because it doesn’t cover many typical use cases.
People would need to learn the difference between PEP 582 and venv: when to use PEP 582, and when they still need a venv.
For example, consider a web app project with this layout:
.venv/ # venv for project. __pypackages__ in PEP 582.
tools/ # some script files in here.
myapp/ # Application code including wsgi entry point.
This is a very common setup. I read PEP 582, but I could not find answers to these questions:
How can Python scripts in tools/ use the __pypackages__ in the project root?
How do I add myapp/ to the import path?
Do I need to make myapp an installable package and use pip install -e .?
Unless PEP 582 covers most venv use cases, I cannot support it.
Defining a standard venv directory name would be much better than PEP 582.
If pip supported a "standard venv directory", users could get a pipenv-like workflow with pip alone.
I will say that I don’t think a solution here has to be everything to every class of user.
What I do worry about is whether a solution is actually serving the people it purports to serve, in the way that it wants to serve them; whether there’s a better and/or more general solution for those users; whether it introduces hidden failure cases or issues that aren’t being seen; plus how it impacts people who the solution isn’t attempting to serve.
I’m hesitant on the PEP 582 idea, largely because I’m not sure that I see a big win here that can’t be solved another way, that reuses the tools and concepts that already exist. The __pypackages__ directory, as implemented in PEP 582, doesn’t give you a good stepping stone into other tools or other workflows. If you outgrow it and need to use virtual environments, there’s not an easy path forward that isn’t “transition everything you use to use a different isolation mechanism” or “bifurcate isolation mechanisms between projects”.
It also complicates things for existing users, who now have yet another isolation mechanism that they need to handle, understand, etc.
If we instead focused on something that revolved around venv, then we have a better path between workflows, new users can have something that tries to paper over the venv, but as they learn and grow, they can start looking behind the curtain at the underlying venv, and even decide that maybe they would prefer something like virtualenvwrapper that keeps all the same concepts they’ve already learned, but moves them into another location.
(excuse my newly invented shell syntax - adapt for your own language)
The primary thing PEP 582 is intended to deal with is having to understand the shell. If we can paper over “full path to Python” and “environment variables”, I’ll be happy. Even better if “clone a repo, double-click the .py file and it has all its packages” also works (as it would under 582, though there are a couple more options with that one).
A user in an IDE shouldn’t have to drop out of it to run shell commands. A user cloning a repository shouldn’t have to look up the “create an env and install packages” steps for it.[1] A user on PowerShell shouldn’t have to translate a .env list of environment variables. There’s a huge reliance on being a shell expert just to get basic Python stuff running, and anyone who has run a workshop at a PyCon has struggled to get all their attendees past this point (wonder why hosted Jupyter is so popular? No shell skills required. Don’t believe me? Go to PyCon and check out the tutorials.)
I’m far more concerned about users growing up from installing apps on their phone to writing Python scripts than I am about users growing up from “my packages go where I’m working” to “I designate a specific location for packages and tools that I’m currently working with”.
If you have an idea that handles this as a layer on top of the current tools without needing to install more shell integrations, please, go ahead and share it. If it looks like it works, I’ll back you all the way - I’m not wedded to this idea. But speculation that such an idea might exist isn’t the same thing.
Choosing a default requirements.txt file/equivalent is definitely not covered here, but it doesn’t need to be. ↩︎
Teach python how to locate a well-known, named virtual environment, and recommend that tools install into it by default?
Like you could change PEP 582 to just turn __pypackages__ into a virtual environment, and teach python how to activate that instead. It’s roughly the same thing from an “I am starting from zero” UX, but now if they graduate to, say, needing to access something that was installed into bin/ (I don’t think PEP 582 discusses how to handle bin/ ~at all), that provides a natural stepping stone to be like, oh hey, actually everything got installed into this directory, and you can just open it up and run commands directly from there too! Which then provides a stepping stone into “well that well known directory can just be located anywhere you want”.
To turn this around, what’s the benefit to creating something that is like a venv, but isn’t quite the same, when venv is right there? AFAICT none of the benefits of this PEP come from the not-quite-venv implementation of it, and all of them come from the fact that there’s some default location that python will load by default.
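A minimal sketch of what “teach python to find and use a well-known venv” could look like, assuming a conventional .venv directory name and a launcher-style re-exec (every name and behaviour here is illustrative, not something any PEP specifies):

```python
from __future__ import annotations

import os
import sys
from pathlib import Path

WELL_KNOWN = ".venv"  # assumed conventional name, not standardized anywhere


def find_well_known_venv(start: Path) -> Path | None:
    # Walk up from the starting directory looking for the well-known venv.
    for directory in (start, *start.parents):
        candidate = directory / WELL_KNOWN
        if (candidate / "pyvenv.cfg").is_file():  # marker of a real venv
            return candidate
    return None


def maybe_reexec_into_venv() -> None:
    venv = find_well_known_venv(Path.cwd())
    if venv is None or os.environ.get("VIRTUAL_ENV"):
        return  # nothing found, or an environment is already active
    bin_dir = venv / ("Scripts" if os.name == "nt" else "bin")
    python = bin_dir / ("python.exe" if os.name == "nt" else "python")
    # Re-exec the venv's interpreter so both site-packages and bin/ entry
    # points behave exactly as they would in an activated environment.
    os.execv(os.fspath(python), [os.fspath(python), *sys.argv[1:]])
```

Because the well-known directory really is a venv, “graduating” just means learning that the same directory can be created, inspected, or relocated with the tools that already exist.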
Let me summarize some problems raised recently in this thread that PEP 582 needs to clarify or improve:
1. OS-isolated library paths
The folder structure of the current proposal is not enough to isolate packages between different platforms; a possible solution is to name the directory with platform tags like cp310-win-amd64.
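For illustration only (the exact tag format below is made up, not something the PEP defines), such a directory name could be derived from the running interpreter:

```python
import sys
import sysconfig


def pypackages_tag() -> str:
    # e.g. "cpython-3.10-win-amd64" or "cpython-3.10-linux-x86_64"
    impl = sys.implementation.name
    version = f"{sys.version_info.major}.{sys.version_info.minor}"
    platform = sysconfig.get_platform()
    return f"{impl}-{version}-{platform}"


# Packages would then live under __pypackages__/<tag>/lib rather than a
# version-only directory that different platforms could clobber.
```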
2. System site-packages ignorance
Partially agree, but it may cause problems for some “interpreter wrappers”. Let me explain: say we make a “beautiful python”, or bpython, with changes to the output and exception hook so that it prints beautiful ANSI colors to the terminal, and we distribute it as a Python library uploaded to PyPI. A user installs it into the global site-packages with the system python. When they execute bpython /path/to/myscript.py, should system site-packages be ignored? If we ignore it, bpython will be broken. But if we don’t, and put __pypackages__ on sys.path in front of system site-packages, chances are that incompatible versions of dependencies in __pypackages__ get prioritized and, again, break bpython (see pdm#849). It seems this problem is non-trivial to solve without tweaks to the interpreter regarding how and when site-packages are loaded.
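In sys.path terms, the hazard looks roughly like this (the path and package names are made up for illustration):

```python
import sys
import importlib.util

# bpython itself was imported from the global site-packages, but the
# project's __pypackages__ gets prepended in front of it:
sys.path.insert(0, "/project/__pypackages__/3.10/lib")

# Any dependency bpython shares with the project -- say pygments -- now
# resolves to the project's copy first, which may be an incompatible version.
spec = importlib.util.find_spec("pygments")
print(spec.origin if spec else "pygments not installed")
```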
3. Problem of finding __pypackages__
I agree that this proposal must be extended with how an interpreter looks for the __pypackages__ to load. It shouldn’t be restricted to the same directory in which the script resides. PDM’s practice is to search the current directory and its ancestors, up to a configurable maximum depth.
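A rough sketch of that kind of lookup (illustrative only, not PDM’s actual code):

```python
from __future__ import annotations

from pathlib import Path


def find_pypackages(start: Path, max_depth: int = 5) -> Path | None:
    # Check the starting directory and up to `max_depth` ancestors.
    directory = start.resolve()
    for _ in range(max_depth + 1):
        candidate = directory / "__pypackages__"
        if candidate.is_dir():
            return candidate
        if directory.parent == directory:  # hit the filesystem root
            break
        directory = directory.parent
    return None
```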
4. Project-level __pypackages__
In addition to the last point, a project-level __pypackages__ must be seen by all scripts and modules inside it. A project can be defined by a pyproject.toml in the root. This is also what PDM is doing at present.
5. bin handling
I strongly suggest that PEP 582 consider this and make __pypackages__ a full install scheme in sysconfig, with bin, include, lib, and so on. The current PEP seems to focus only on running a standalone script and ignores usage inside a project, where it is common to install binaries as well as libraries into the project-level __pypackages__, and users may expect to run them with pip run <executable>.
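As a hypothetical example of what such a scheme could look like (the key names follow existing sysconfig schemes, but the layout itself is an assumption, not something the PEP specifies):

```python
# Hypothetical sysconfig-style install scheme for __pypackages__.
PYPACKAGES_SCHEME = {
    "purelib": "{base}/__pypackages__/{py_version_short}/lib",
    "platlib": "{base}/__pypackages__/{py_version_short}/lib",
    "include": "{base}/__pypackages__/{py_version_short}/include",
    "scripts": "{base}/__pypackages__/{py_version_short}/bin",
    "data": "{base}/__pypackages__/{py_version_short}",
}
```

With a scripts entry defined, installers would have an agreed place to put console-script executables, which is exactly the bin/ question the current text leaves open.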
P.S. For all the problems mentioned above, PDM has solved 3,4,5, partially solved 1, and hasn’t solved 2.
Yeah, because I burnt out on this discussion and never updated the PEP with any of the feedback from the first ~100 replies. But if I did, it would say “if you need bin/ and -m won’t do, use a venv”.
no env variables
no symlinks/launchers
no absolute paths embedded in configs/executables
no PATH manipulations
no additional tools needed to be known or acquired
<Deleted: one extended rant which basically shows me that I’m still not ready to come back to this topic… whoever picks this one up, good luck. I’m muting this again.>
This already happens with -S. It would break if I tried to use that option with bpython, so you will need to either figure out a way to make that work (which would also fix the isolated PEP 582 directory issue), not present it as a python executable replacement, or distribute it some other way.
Overall, I think this is a small price to pay to make sure PEP 582 installs don’t break when something on the system updates.
Totally off-topic, but if anyone is aware of any standard or anything around environment variable definition files like .env, please let me know! It’s a constant headache for me at work that there isn’t one, especially when everyone seems to assume they are formatted for the OS you’re running which is not necessarily the case (i.e. Steve’s comment when someone wrote a .env file for bash; not every shell uses : as a path separator).
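As a concrete (purely illustrative) example of why the lack of a standard bites: a path-list variable has to be written differently per platform, which a single .env file can’t express:

```python
import os

# The path-list separator is OS-specific: ':' on POSIX, ';' on Windows.
entries = ["./src", "./vendor"]
print(f"PYTHONPATH={os.pathsep.join(entries)}")
# A .env file hard-coding PYTHONPATH=./src:./vendor silently breaks on Windows.
```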
I made this recommendation over in the PEP 704 thread, so I’m +1 at least on the standardized naming/location scheme for where to look for virtual environments when one isn’t explicitly detected via $VIRTUAL_ENV. As for python (but not python3 or python3.11) garnering the smarts to automatically use an environment, I’m probably +0.
I implemented pipx run script.py which picks up dependencies from a comment block in the script. The PR is awaiting merge and a new release of pipx, which is down to the pipx maintainers’ availability.
This is getting way off-topic, but whether or not building universal2 wheels is hard primarily depends on whether or not you have external dependencies. For simple projects that only use system libraries building universal2 wheels is automatic when using a universal2 python installation (such as the ones we ship on the python website).
For projects with external library dependencies and/or other languages than C (e.g. Fortran) it can be harder to get those dependencies in a “universal2 form”. Still doable, but takes more effort and that’s effort that can be spent on other work.
This. Or rather, a solution that makes doing something like this easy, without needing to manually manage the environments. Not as a competing proposal, but as a complementary one, for use cases where it makes sense. Most solutions I’ve seen require the user to manually remember which script uses which (named) environment.
The pipx solution works somewhat like this, but it doesn’t share environments except in a very basic way, and its handling of cleanup (treat the environment as a cache, so it gets deleted when it’s not used for a while) isn’t that good.
And implicitly, I’m asking that we don’t exclude workflows like this by accepting __pypackages__ or .venv, and then baking it into tools and tutorials and documentation to the point where people with different backgrounds/preferences have to “fight the system” to work the way they want to.
I never set environment variables, and I limit what I put in config files. But yeah, fighting to eliminate configuration details (for example when trying to track down a bug) is often annoyingly hard.
And if you saw my working environment you’d appreciate why I hate triggering a behaviour change on the presence of a magic directory (or file). I routinely work in a “scratch” directory full of small Python scripts, C programs, HTML files, PowerShell scripts, etc. If something dumped a __pypackages__ directory in there, I’d quite likely not notice for ages (until it caused a problem which meant I was trying to track down a bug, most likely).
This is a good point. For example, if I have an activated virtual environment, and I run python in a directory with a __pypackages__ subdirectory, which takes precedence? The virtual environment or the __pypackages__? How do we explain that to users (newcomers and/or experts)?
@pf_moore you are actively against this PEP despite the fact that this approach works very well and is well understood in other languages, and you have some sort of preference for scripts declaring their dependencies at the top, very similar to the approach in Deno (then what’s the point of having pyproject.toml at all??)
@frostming and @brettcannon already provided solutions (or partial solutions) to all the problems mentioned, so I’m not sure why this PEP should be rejected or delayed.
I’ve seen a lot of people using PDM, and they didn’t have any issue understanding the workflow proposed by this PEP.
No need to be pushy here — the PEP authors will update the PEP to address the criticism, and discussion is going on. Ignoring or dismissing people’s concerns is not the way to advance this.
Prefacing this with the fact that I haven’t read all of the almost 300 comments in this thread but stumbled upon it by finding the PEP submission.
Has anybody considered the impact this would have on common linters and tooling? As a pylint maintainer I foresee many issues with adding another layer of complexity to an import system that is already quite hard to mimic while not actually executing any import statements directly (we don’t want this because of code execution concerns).
Adding stuff to sys.path depending on certain conditions would certainly be something we won’t be able to add support for in a simple fix.
I haven’t seen any comments about the effect of this on other tooling, but I wonder if they share similar concerns. Obviously every tool is different, so I wonder if this is purely specific to static linters, or if it is a broader concern.
I understand that a simple counter argument would be that if you want to use such tooling you should consider a virtualenv anyway, but this would then increase the barrier of adoption for tooling for new Python users.
What’s the specific complexity you’re foreseeing? From a Python perspective this would just be another directory on sys.path. Granted there would be some calculation as to what directory to use, but it’s no worse than where the stdlib is kept or where site installs are kept.
Pylint patches sys.path based on the directory of the file that is being linted and the arguments that were given when invoking the tool. This is already causing us headaches, as it is far from perfect, but for now it’s the only way we’ve found to avoid code execution. This also means that the sys.path pylint uses to evaluate imports is not the same as the one you would get by just invoking the interpreter. (We’re open to better solutions…)
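Roughly, the patching looks like this (a simplified illustration, not pylint’s actual code):

```python
import sys
from pathlib import Path


def augmented_sys_path(files_to_lint: list[str]) -> list[str]:
    # Prepend each linted file's directory, mimicking what running
    # `python path/to/file.py` would put on sys.path.
    extra = dict.fromkeys(str(Path(f).resolve().parent) for f in files_to_lint)
    return [*extra, *sys.path]
```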
Similarly I wonder whether users would (for example) expect pylint to add the new directory to sys.path if it was found at the level of the file being linted even though it is not available to the interpreter that is being used to run pylint itself.
For example, what if I have multiple hobby projects with different package directories that all live under a larger “Python projects directory”? If I understand the proposal correctly, you wouldn’t be able to lint these files from the top-level directory but would need to enter each individual directory to get the correct sys.path. As a new user, I think I would definitely find that confusing: why does this tool behave differently based on the directory I’m calling it from? (I’m specifically thinking of people just using the direct pylint command without a python -m.)
We get some reports every few months about pylint not finding imports that “work perfectly fine when I execute the file”. From an initial reading of the PEP this seems like it could cause further confusion as it adds another variable to deal with.
But perhaps we should also just see it as a test for our import system! I mostly just wanted to see if others had similar (initial) reservations/thoughts.