Sorry for the delay… I'd like to summarize things the way I see them now. But first I'd like to present another gist: it describes the approaches I'm aware of to running sites under Docker locally, because what follows depends on it.
So, on one hand you can indeed create a virtual environment in a container, which sounds somewhat superfluous to me. Meaning, as long as you're pretty specific about the image (e.g. `python:3.9-alpine3.13`), there should be no reason to isolate yourself from the system Python. But by using a virtual environment you can make it visible from the host.
On the other hand, you can install the packages to their default location (`/usr/local/lib/pythonX.Y/site-packages`) without creating a virtual environment. That way they won't be visible on the host, but you can always launch `vim` or something in the container and inspect them all you want. And under macOS you are probably out of other options. I mean, you can create a volume with a virtual environment, but there seems to be no benefit to it in this case: creating a virtual environment is extra effort, and the packages won't be visible from the host with or without it.
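Incidentally, you can ask the interpreter itself where that default location is. Inside the official `python:X.Y` images this should print `/usr/local/lib/pythonX.Y/site-packages`; on the host it will print the host's own location instead:

```python
# Ask the running interpreter for its default package location.
# In the official python:X.Y Docker images this is
# /usr/local/lib/pythonX.Y/site-packages; on a host (or in a venv)
# it will differ.
import sysconfig

p = sysconfig.get_path("purelib")
print(p)
```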
Then, as @uranusjr clarified, `--target` is for embeddable packages. `--root` and `--prefix`… are probably for e.g. building OS packages (see link c) or something? `--root` allows one to install files into a temporary root filesystem (make a directory that looks like a root filesystem), archive the directory, and obtain as a result a file (an archive) that, when unpacked, puts the files in the proper places. `--prefix` is needed if you want to put the files at some custom location, like `/opt`, under that temporary root filesystem. Although at least in Arch Linux they prefer… `setuptools`, or `distutils`? I'm not sure. Which kind of makes sense: `python setup.py install --root=tmproot` is like building a package from source, while `pip install --root tmproot ...` is… like installing packages to be used by something else? For example, by a Python script that wasn't published on pypi.org for whatever reason but uses some Python packages. To build an OS package for such a script it makes sense to use `pip install --root=...` to install the dependencies.
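To make the `--root`/`--prefix` combination concrete, here's a sketch of staging files into a temporary root filesystem the way an OS-package build might. The `demo` package is made up, and the wheel is built by hand so that no network access is needed:

```python
# Sketch: stage a made-up package into a temporary root filesystem
# with `pip install --root ... --prefix ...`. The files end up under
# tmproot/opt/...; archiving tmproot yields an archive that unpacks
# into /opt.
import os
import subprocess
import sys
import tempfile
import zipfile

tmp = tempfile.mkdtemp()
wheel = os.path.join(tmp, "demo-0.1-py3-none-any.whl")
with zipfile.ZipFile(wheel, "w") as zf:
    # A minimal hand-made wheel: one empty module plus metadata.
    zf.writestr("demo/__init__.py", "")
    zf.writestr("demo-0.1.dist-info/METADATA",
                "Metadata-Version: 2.1\nName: demo\nVersion: 0.1\n")
    zf.writestr("demo-0.1.dist-info/WHEEL",
                "Wheel-Version: 1.0\nGenerator: manual\n"
                "Root-Is-Purelib: true\nTag: py3-none-any\n")
    zf.writestr("demo-0.1.dist-info/RECORD", "")

tmproot = os.path.join(tmp, "tmproot")
subprocess.run(
    [sys.executable, "-m", "pip", "install", "--quiet",
     "--root", tmproot, "--prefix", "/opt", "--no-deps", wheel],
    check=True,
)

# Show what was staged, relative to tmproot, e.g.
# opt/lib/pythonX.Y/site-packages/demo/__init__.py
for dirpath, _dirs, files in sorted(os.walk(tmproot)):
    for name in files:
        print(os.path.relpath(os.path.join(dirpath, name), tmproot))
```

Archiving `tmproot` (e.g. `tar -C tmproot -czf demo.tar.gz .`) then gives a file that, unpacked at `/`, puts everything under `/opt`.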
Let me first make clear how I see the solution from the original post now:
```dockerfile
FROM python:X.Y-alpineZ.A
ENV PYTHONPATH site-packages
COPY .pydistutils.cfg ~/
...
```
With that you can do `docker-compose exec site pip install -r requirements.txt` to install the packages. `~/.pydistutils.cfg` makes the packages get installed into `site-packages`, and `PYTHONPATH` makes `python` find them.
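The `PYTHONPATH` half of the trick is easy to check without Docker: a relative entry like `site-packages` ends up on `sys.path` of any interpreter started with that environment, resolved against the startup directory:

```python
# How `ENV PYTHONPATH site-packages` works: the entry is added to
# sys.path of any interpreter launched with that variable set.
import os
import subprocess
import sys

env = dict(os.environ, PYTHONPATH="site-packages")
out = subprocess.run(
    [sys.executable, "-c", "import sys; print(sys.path)"],
    env=env, capture_output=True, text=True, check=True,
).stdout
print("site-packages" in out)  # True
```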
Why do I not do `pip install` in a local (development) `Dockerfile`? Because if I later bind-mount `.` (host) into `/site` (container), then the virtual environment at `/site/env` exists in the image, but is visible neither from the container nor from the host. I.e. I need to do `pip install` after launching a container anyway, so why do it in the `Dockerfile` then?.. The effect is nullified by bind-mounting the project root into the container.
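For illustration, a hypothetical compose fragment (the service name and paths here are assumptions, matching the paths mentioned above):

```yaml
# Hypothetical docker-compose fragment: the bind mount shadows
# whatever was installed under /site at image build time,
# including /site/env.
services:
  site:
    build: .
    volumes:
      - .:/site
```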
But indeed, with the original solution one must know what `PYTHONPATH` is and what the content of `.pydistutils.cfg` means. And indeed, without a virtual environment one would generally look for packages either at `/usr/local/lib/pythonX.Y/site-packages` or at `~/.local/lib/pythonX.Y/site-packages`. Why they are not there, and what redirected them, might not be easy to find out. That can be somewhat mitigated by adding comments, but it's still a downside.
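For what it's worth, here's a plausible sketch of such a file. This is an assumption on my part, not the original post's exact contents; `install_lib` is a standard `[install]` option that distutils reads from `~/.pydistutils.cfg`:

```ini
# A sketch only, not the original post's exact file.
# distutils reads command options from ~/.pydistutils.cfg;
# this redirects pure-Python modules into ./site-packages,
# which is what PYTHONPATH then points at.
[install]
install_lib = site-packages
```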
First, because it sounds like a virtual environment inside a virtual environment (`docker`). Also, probably because Python is somewhat different from other languages in this respect. With Ruby you have `bundler`, which is to Ruby what `pip` is to Python. And you have `chruby`/`rbenv`/`rvm`, which let you have several versions of Ruby installed alongside one another. So `venv` (probably) naturally comes off as something akin to the latter (`chruby`/…), since a virtual environment contains its own Python (or so it seems) and all. Then, I saw no `chruby`/… stuff in `docker` containers. Nor is that the case with Node.js, PHP, and probably others. Which made me think virtual environments are not needed there either. And by the way, there's no `venv` in the `docker` tutorial (see link d).
I believe such a command once gave me a message along the lines of something being missing in `pip._internal`, but that was probably a rather old `pip`.
Why? I can see only one reason: if you see a path in the output, you can use it as is both on the host and in the container. Which is kind of mild, but okay, an upside.
P.S. Maybe something is different about the way we use Docker for development, and as a result we're having a hard time understanding each other about the `pip`/`venv` stuff.