I’ve got a couple of machines with only 512 MB of RAM, and some 24/7 services running on them, so not all of that is available. They also have some Python 3.13 virtual environments into which I occasionally need to install packages when new versions are released.
Some of those packages have dependencies that use native code (C or C++), which means pip install of the entire dependency tree ends up using GCC or Clang to build that native code… and the machine runs out of memory.
I’ve got my own package index available and I could upload self-built wheels into it, which would then allow me to add --only-binary :all: to the pip command line and ensure that it never tries to build native code on the target machine. The trick is getting all of them built… is there any reasonable approach to taking a requirements.txt and building wheels for every package found via the resolver from that list of requirements?
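On the target machines the install would then look something like this (the index URL is just a stand-in for my own index):

    pip install --only-binary :all: --index-url https://my.index.example/simple/ -r requirements.txt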
Interestingly, pip wheel doesn’t behave exactly as documented, although its behavior is what I wanted.
The documentation says it will ‘Build Wheel archives for your requirements and dependencies’, but in actuality it downloads wheels that are already available and only builds them for packages that don’t have a wheel available for the target system.
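So a single invocation against a requirements file is enough to end up with a complete local set of wheels, something like this (the output directory name is arbitrary):

    pip wheel -r requirements.txt -w ./wheelhouse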
Combining pip wheel and uv publish, I’ve now got a small script which can take a number of requirements files, ensure that local wheels exist for all of them, and then upload them to my own index. Since my index is actually a ‘virtual index’ which combines an internal repository and PyPI, only the wheels that are unique actually get uploaded.
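The core of it is roughly the sketch below; the index URLs are placeholders, credentials are assumed to come from UV_PUBLISH_TOKEN, and the --check-url option here is one way to have uv skip wheels the virtual index can already serve from PyPI:

    #!/bin/sh
    # Sketch of the build-and-publish script. The index URLs are placeholders,
    # and credentials are assumed to come from the UV_PUBLISH_TOKEN environment
    # variable.
    set -eu

    WHEELHOUSE=./wheelhouse
    mkdir -p "$WHEELHOUSE"

    # Resolve each requirements file and collect (or build) wheels for the
    # whole dependency tree into the local wheelhouse.
    for req in "$@"; do
        pip wheel -r "$req" -w "$WHEELHOUSE"
    done

    # Upload the wheelhouse; --check-url skips files the index can already
    # serve (e.g. anything that exists on PyPI), so only the unique wheels
    # actually get uploaded.
    uv publish \
        --publish-url https://my.index.example/upload/ \
        --check-url https://my.index.example/simple/ \
        "$WHEELHOUSE"/*.whl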
pip install, pip download, and pip wheel all share the same collector, so they all behave about the same when it comes to choosing between sdists and binaries.
That is, you can pass --only-binary :all: for all wheels, --no-binary :all: for all sdists, or --prefer-binary to take an older wheel even when a newer sdist is available.
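For example, these all go through the same selection logic (the directory names are just illustrative):

    pip install  --only-binary :all: -r requirements.txt
    pip download --no-binary :all:   -r requirements.txt -d ./sdists
    pip wheel    --prefer-binary     -r requirements.txt -w ./wheelhouse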