I think the idea is that a failure because the locked files are not available is fine. It should fail cleanly with clear diagnostic information (i.e. “X, Y and Z files are missing”) and should be simple for an administrator of the index to fix. Once the needed files are known to have been added it should be easy to ensure that they continue to be there in future. The idea would be to test these things at the time of updating the lockfile and then know that if you got it working before deployment it should continue working later using the same locked dependencies. What would not be fine is if adding more files to the index caused different dependencies to be used for any build process potentially causing what should be a known tested build to fail.
To make this more concrete the recent Cython release (0.29.31 - now yanked) broke building what was at the time the current SciPy release (1.8.1). The SciPy 1.8.1 build requirement allows cython==0.29.* meaning that as soon as Cython 0.29.31 was released it would be used by pip to build SciPy even though that configuration had clearly not been tested by anyone. If you’ve tested that your SciPy 1.8.1 builds correctly with Cython 0.29.30 in your target systems then in the locked configuration that should continue to work as before: the addition of Cython 0.29.31 to the index should not affect the SciPy build if the lockfile is unchanged.
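To illustrate with pip's existing hash-checking mode (the digest below is a placeholder, not the real hash):

```
# requirements.lock -- installed with: pip install --require-hashes -r requirements.lock
cython==0.29.30 \
    --hash=sha256:<digest of the artifact you actually tested>
```

In `--require-hashes` mode pip refuses to install any file whose hash is not listed, so the later upload of 0.29.31 to the index cannot change what gets installed from an unchanged lockfile.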
I want to echo many of Greg’s comments as they seem to align well with how I think about them at my work (also a large company that uses Python heavily). We want some level of “reproducibility”, but are pragmatic about how we approach it.
Our main concern is to have a consistent set of inputs (a specific version for each direct & indirect Python dependencies) so that we can build a deployment. This is primarily a container image, but occasionally a VM. But in both cases, the OS and Python version are known and constant. Changing to a new OS version or Python version is an explicit action, so needing to generate a new lock file as a result is not a big problem.
Wheels are preferred because they “just work” and make libraries that are not pure Python much easier to work with. We understand that sdists can do anything, but they generally don’t, so wheels are not a hard requirement. The few times I’ve seen a setup.py dynamically generate install_requires, the cases fell into two camps:
Manually doing what environment markers support
Reading requirements.txt into a list.
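For illustration, a minimal sketch of those two patterns (the package names and the helper function are hypothetical, not from any particular project):

```python
# Camp 1: manually doing what environment markers already support.
import sys

install_requires = ["requests"]
if sys.version_info < (3, 8):
    # A static environment marker would express the same thing declaratively:
    #   'importlib-metadata; python_version < "3.8"'
    install_requires.append("importlib-metadata")

# Camp 2: reading requirements.txt into a list.
from pathlib import Path

def read_requirements(path="requirements.txt"):
    # Skip blank lines and comments; each remaining line is a requirement.
    return [
        line.strip()
        for line in Path(path).read_text().splitlines()
        if line.strip() and not line.strip().startswith("#")
    ]
```

Both patterns are statically analysable in practice, which is why sdists doing this don’t actually defeat locking.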
Including hashes for the sdist or wheel confirms that the data used when generating the lock file is the same data as what gets installed. However, for our concerns, byte-for-byte reproducibility is a nice to have and not a requirement.
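As a sketch of what that hash check buys (the helper name is mine, not any tool’s API):

```python
import hashlib

def artifact_matches(data: bytes, locked_hash: str) -> bool:
    """Check downloaded bytes against a 'sha256:<hexdigest>'-style locked hash."""
    algorithm, _, digest = locked_hash.partition(":")
    return hashlib.new(algorithm, data).hexdigest() == digest
```

If the index later serves different bytes under the same filename, installation fails instead of silently using data that was never tested.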
Correct, based on what I have spoken to Steve about on this topic.
Yes, or even just the hashes if you ignored user experience.
That’s the dream.
That’s an open question as you have already moved away from reproducibility thanks to not being able to lock down to the OS (unless you’re Nix or using a specific container image), and so what are people viewing as “locked” in this instance?
You would probably have to somehow nest the information under the sdist declaration.
It might require something like build to be given the environment to build with and to prevent it from trying to install other things that didn’t come with the environment.
And potentially unexpected outcomes, such as build failures simply due to different build tool versions, different files being made available, etc.
I think the folks who advocate for allowing sdists would need to speak to whether locking down the build tools is important to them.
I did explore this for myself, hence PEP 665.
How do you generate the indirect dependencies for sdists? Are you building the environment (and thus the wheels), and then locking down what gets installed (e.g. the pip-tools approach)? If so, what is preventing you from using those built wheels instead of constantly building the sdist (which implicitly creates a wheel) to do the installation? Is it the lack of delivery mechanism for those built wheels to your deployment?
I think this is where I’m personally having a hard time understanding what the expectation/gain is for the folks bringing up the “lock the inputs” idea. If you want all direct and indirect/transitive dependencies, something is doing a build to the point that the metadata is available. You should be performing this on the platform that mirrors where the installation will occur (or else you can end up missing dependencies that the build tool simply doesn’t cover on a specific platform). But are you actually assuming that an sdist, built to the point of metadata, will always return all dependencies for all platforms and use markers to have the installer handle things, no matter where the sdist is built? And so all you have to do is build the sdists and everything else is already in the lockfile (i.e. --no-deps)?
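The assumption in that last question can be sketched like this (a toy evaluator for illustration only; real tools do full PEP 508 marker evaluation):

```python
def select_for_platform(requirements, sys_platform):
    """Toy marker evaluation: handles only 'sys_platform == "..."' markers.

    Under the assumption above, an sdist built anywhere returns one
    requirement list valid for all platforms, and the installer merely
    filters by marker -- no platform-specific resolution is needed.
    """
    selected = []
    for requirement in requirements:
        name, _, marker = (part.strip() for part in requirement.partition(";"))
        if not marker or marker == f'sys_platform == "{sys_platform}"':
            selected.append(name)
    return selected
```

If sdists actually behave this way, locking on one platform and installing on another is sound; if a build emits different requirement lists per platform, it is not.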
Maybe I wasn’t clear - I meant standardise (something like) what pip-tools and poetry currently do. Which includes sdists, unlike PEP 665. Or maybe you meant the initial discussions, when PEP 665 allowed for sdists?
Nope, you were clear. What I’m saying is I did my own personal exploration into what was already out there and decided PEP 665 was worth the effort because what was already being done didn’t support what I was after. When you said, “which no-one has explored yet as far as I’m aware,” I’m just saying I did an exploration, but it wasn’t public.
Maybe an approach to consider is to be clear and explicit about the trade-offs in the solution. This will allow people to make decisions about what is appropriate for their purposes and to have clear expectations.
Proposal: Lockfile PEP
Lockfile design MUST:
Record all transitive install dependency versions and hashes (sdist or bdist)
Record all transitive build dependency versions and hashes (sdist or bdist)
Result in byte-for-byte installation output iff the install dependencies for a target platform are all bdists and the lockfile was produced on the target platform (PEP 665)
Warn users if installation into a target platform involves an sdist, since the install may fail and will not be reproducible
Lockfile design MUST NOT:
Guarantee reproducibility if there is a single sdist in the transitive closure for a target platform
Guarantee successful or correct installation for any lockfile that includes an sdist for installation on a target platform that is not the same as the locking platform. Users should be directed to instructions to produce the lockfile on the target platform or to prepare a wheelhouse or mirror that contains wheels for the desired target platforms.
Perform additional resolution or fetch files or hashes not already listed in the lockfile
Guarantee that it is possible to produce a lockfile if the transitive dependency closure (build or install) includes an sdist for any requested target platform. Users should be directed to instructions to produce the lockfile on the target platform or to prepare a wheelhouse or mirror that contains wheels for the desired target platforms.
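A lockfile meeting those requirements might look something like this (purely illustrative TOML; the field names and hashes are placeholders of my own, not a standardised format):

```toml
[metadata]
marker-environment = { sys_platform = "linux", python_version = "3.10" }

[[package]]
name = "scipy"
version = "1.8.1"
artifact = "scipy-1.8.1-cp310-cp310-manylinux_2_17_x86_64.whl"
hash = "sha256:<placeholder>"

[[package]]
name = "some-sdist-only-dep"
version = "2.0"
artifact = "some_sdist_only_dep-2.0.tar.gz"  # an sdist: install would emit the warning required above
hash = "sha256:<placeholder>"
build-requires = [
    { name = "setuptools", version = "65.3.0", hash = "sha256:<placeholder>" },
]
```

The point is only that both install and build dependencies carry versions and hashes, and that sdist entries are distinguishable so the installer can warn.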
I disagree with that — performing unstable resolution that might give different results on different runs is a problem, doing it at all is not. Besides, that’s fundamentally required if we ever want to have cross platform lock files.
Maybe I’m being imprecise in what I mean by “additional resolution”. I mean that “at point of installation, given an existing lockfile, no new files or hashes should be introduced that were not already expected for the target platform at the time the lockfile was produced.”
Yes. Currently the process is effectively pip install -r requirements.txt && pip freeze > requirements.lock. I’ve looked into using pip-tools and see it as an improvement. However, I haven’t tackled the “how do we roll this out to hundreds of repos maintained by many different teams?” problem. Other priorities, plus seeing PEP 665 and this conversation, put that work on hold. (I’d rather not have to manage the migration twice.)
I agree that a mechanism to centrally cache those built wheels could improve things. However:
I don’t currently know where those end up. If there is a deterministic location, I don’t know whether it is defined behavior or an implementation detail.
Retaining those wheels would mean the system doing the lock file generation would also need write permissions to the internal package index. Lock file generation is generally done by an engineer running a docker-compose service on their computer that starts up the target container and writes back a newly generated file. There are risks in letting any engineer upload a wheel that would be used by every project in the company.
Years ago, we did have a system where people had to explicitly pull packages into the internal index before they could be used. However, it caused lots of toil because of things like:
People would not be aware that it was a required step for production.
One person had a wheel built, but a second person needed it for a different Python version. (pip will say “these are the versions that are compatible with your system”. It won’t say “there are wheels for that version, but not for Python 3.7, which you are using”.)
Because new versions weren’t automatically available, people never updated and fell behind the latest version of the packages they were using.
That isn’t to say we couldn’t come up with a system that automatically builds wheels to put in the internal index for any package used in the company that doesn’t publish wheels. It’s just that doing so would be hard, and there are other things we could do that provide more value for the company.
Our current expectation is that the lock file is only guaranteed to work for the platform it was produced for. We aren’t having someone generate a lock file on a Mac and expecting it to be guaranteed to work on a Linux production machine. For example, we expect that a lock file generated on Red Hat 8.6 with Python 3.10 will work when installed on Red Hat 8.6 with Python 3.10. Being that restrictive with our expectations makes the situation less complicated.
Having read the thread, I understand that there are other use cases where guaranteeing that the file will work on all platforms is desired. I am not saying that use case is wrong, but it is a different use case that we don’t require.
Brett answered accurately on my behalf, but I’d like to add to this bit.
This is basically correct. If I cared that the wheel doesn’t change, I’d build it once and copy it to wherever it needs to be - if I haven’t done that, it means I don’t care and just want it built. And if the build tools aren’t available, I want an error and I’ll go grab the build tools (or more likely, build it once and put a wheel where I need it).
And I virtually always build wheels manually - that is, not via pip - to avoid the unlockable dependency issue. It does mean scripting the build steps separately for each package, but at least I only need to look as far as the pyproject.toml to figure out what tools I’m dealing with, and usually at the CI configuration to figure out the real commands.
Point is, I view pip install <sdist via PEP 517> as a convenience for those who don’t care that much about the build, and as I usually care about the build, I don’t use it. So a lockfile that guarantees wheels match the hashes and sdists match the hashes, but doesn’t say anything about the result of building the sdists, is fine by me.
Hence the question of what are the expectations since that doesn’t matter to me. As long as there is a way to make lock files for multiple platforms, I’m personally happy. But for me that means they can be separate lock files.
Ah, that’s the difference. I have seen that often enough to want to (try and) support that use-case.
But do you care if building that sdist introduces new dependencies that are not in the lock file?
If you do care and this is entirely about punting on building a dependency, then here is an idea:
When resolving, have the builder return the metadata for any necessary sdists to get the dependencies; assume they are using markers to delineate platform-specific requirements for all platforms.
Resolve as appropriate, locking wheels and sdists directly.
When installing, all wheels get installed, then sdists are built and installed with no dependency resolution.
This means no dependency surprises at install time while allowing folks to continue to provide sdists that need to be built as appropriate. Without build toolchain pinning you do have to trust the sdist to not pull in anything horrible during the build, but at least you can know upfront what would end up in the final environment.
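As a sketch of the install phase in that idea (the function and data shapes are hypothetical; the real mechanics would live inside an installer):

```python
def plan_install(locked_packages):
    """Turn a resolved lockfile into a sequence of install commands.

    `locked_packages` is a list of (name, version, artifact_kind) tuples.
    Every command uses --no-deps: no resolution happens at install time,
    so nothing outside the lockfile can enter the environment.
    """
    wheels = [f"{name}=={version}"
              for name, version, kind in locked_packages if kind == "wheel"]
    sdists = [f"{name}=={version}"
              for name, version, kind in locked_packages if kind == "sdist"]
    commands = []
    if wheels:
        # Step 1: all locked wheels go in first.
        commands.append(["pip", "install", "--no-deps", *wheels])
    for requirement in sdists:
        # Step 2: each sdist is built and installed without resolution;
        # anything it needs must already be present from step 1.
        commands.append(["pip", "install", "--no-deps", requirement])
    return commands
```

The wheels-first ordering is the design choice that lets sdist builds find their already-installed dependencies instead of triggering fresh resolution.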
What you’ve described, @brettcannon, is what I tried to describe in my proposal above, and it would meet my requirements. I’m happy with separate lockfiles for different platforms, but being able to produce them for a different platform would make users very happy (I do of course realise the many issues it raises with sdists). Poetry does do it, but perhaps without being explicit about the assumptions.
Only in that if they’re not listed in the lockfile, they won’t end up in the final result. So yeah, the tool that generates the lockfile needs to figure out what other dependencies are necessary (I deliberately left out a lot of ideas about how that tool might work, but I would assume it probably can’t do it without actually creating the environment first).
And yeah, it means assuming that sdists are well behaved (and if not, build a wheel first and reference the wheel). The problem becomes impossible if we allow for misbehaving sdists, so best to just define what “well behaving” means and not bother designing for things that don’t fit.