Scripts with PyPI dependencies as a mechanism for distributing apps

A thought popped into my head while thinking about PEPs 722 and 723: I see a future where these simple scripts, with a small list of dependencies embedded in them, become a (possibly the primary) vehicle for distributing applications with Python entry points.

An example of such a script doesn’t need to be complicated:

# Dependencies:
#     awesome-app

import awesome_app

if __name__ == "__main__":
    awesome_app.run()

Which could be run as:

pip-run https://static.example.com/apps/awesome/v2.py
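
(For comparison, PEP 723 would express the same metadata as an embedded TOML block rather than a comment list; a rough sketch, with the exact markers and keys subject to however the proposal settles:)

# /// script
# dependencies = [
#     "awesome-app",
# ]
# ///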

To me, after installing Python and a script runner (in this example, pip-run), there’s very little friction to running arbitrary Python apps [1].

I think this is a good thing, and I don’t believe this should delay accepting PEP 722 or 723 [2], but I’m not convinced I have thought of all the potential problems with this future. Can anyone else think of any?

The one concern I have is that, right now, PEPs 722 and 723 don’t support including hashes. This could mean misconfigured environments are susceptible to dependency confusion and man-in-the-middle attacks. I don’t think that’s unique to apps, though, and things like HTTPS and explicit index configuration are there to prevent them.
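
(For reference, pip’s requirements-file format does already support hash pinning when installing with pip install --require-hashes -r requirements.txt; a hypothetical pinned entry, with the digest elided, looks like this:)

awesome-app==2.0 --hash=sha256:<digest-goes-here>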


  1. from the command line. Not much of a leap to distributing and running via GUIs ↩︎

  2. I’m in the “why not both” camp regarding those two proposals ↩︎


It’s certainly possible that people could choose to package their main logic as a library, distribute that library via the usual package-index mechanism, and then distribute a driver script separately.

I don’t think that entertaining this use case requires any special consideration; it falls out automatically from what is already discussed.

However, I’m not sure why anyone would want to do this, as opposed to using the existing facilities for generating entry points (possibly including existing utilities like the one I tried making once). It just changes the end user’s workflow from “pip install awesome_app (or perhaps pipx instead), then use the entry point described in the documentation” to “grab the driver from the author’s source, then feed it to a script runner that will use pip implicitly”. I think this could only really make sense in a world where a standard script runner ships with Python (or even, say, a --in-own-venv flag for the interpreter), but I can’t see that happening any time soon with current attitudes towards how the community is organized.
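
(For reference, that existing facility is only a couple of lines of pyproject.toml; a minimal sketch, assuming a hypothetical awesome_app package exposing a run() function:)

[project.scripts]
awesome-app = "awesome_app:run"

With that published, pip install awesome-app (or pipx install awesome-app) puts an awesome-app command on the user’s PATH.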

I don’t think this counts as a point for or against either proposal.

This is certainly part of the motivation for Rust’s support, as per @epage (except not runnables, but entire libraries in a file).

PEP 722: Dependency specification for single-file scripts - #179 by epage

We can currently do this by awesome-app declaring a suitable entry point, and then you can pipx run awesome-app. There’s no need for PEP 722/723 or a separate driver script, and I’d actually discourage people from doing this as it gains basically nothing over existing solutions.

The difference is that it’s a single script rather than a PyPI package/wheel. It’s functionally the same model, except allowing you to use it for scripts over the network.

I don’t think we need to discourage it, but it’s certainly not the same set of capabilities as a Python wheel.


Maybe I’m misunderstanding. It seemed to me that the proposal was to distribute the app functionality in a library, and publish a script as a trivial wrapper for that library. In that specific case, using an entry point in the library, rather than a standalone script that downloads and calls the library, seems significantly better.

But yes, if the script has significant functionality beyond just calling one library, that’s different. I just don’t know how you draw the line between “script that uses some libraries” and “script that is a frontend to an app published as a wheel”.

“Discourage” might be too strong. “Let’s not try to make this functionality fit things that don’t really need it” is probably closer to my view.

Just for a concrete example: black, circa two-ish years ago, could’ve fit the bill.

The entire thing was in a single file: https://github.com/psf/black/tree/4fc1354aeb6b217cd18dbdb2a0c41373fa9d8056/src/black

I don’t see the use-case myself to be honest.

If one is looking to formally package, version, and widely distribute software applications or libraries, then there is already broad support (that is always improving). It’s not the number of files that makes the process more or less difficult here; it’s broader concerns such as backwards compatibility, PyPI account security, CI, testing, OSS licensing, collaboration, metadata, versioning, etc.

I don’t think a new packaging standard needs to emerge for single-file scripts. The sweet spot for these is not broad distribution: it’s local scripts for the 80% case, and then maybe a 20% case where you’re emailing or sending a script to somebody to unblock THEIR local problems.

In this sense, they’re more like Jupyter notebooks: scrappy, messy, unstructured, quick-but-useful code.

Sure, notebooks or PEP 722 scripts could be abused for broad distribution of software, but that wouldn’t be healthy for the broader ecosystem in my opinion.

This is the more pertinent use-case. For example, the script could install system dependencies, or it could be the entire command-line front-end to a library. Right now, it’s easier to distribute apps as libraries when using Python packaging.
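
(To make the second case concrete, a minimal sketch of a single-file command-line front-end, assuming a hypothetical awesome_app library; the API shown is made up for illustration:)

# Dependencies:
#     awesome-app

import argparse

import awesome_app  # hypothetical library that does the real work

def main():
    parser = argparse.ArgumentParser(description="CLI front-end for awesome_app")
    parser.add_argument("path", help="file to process")
    parser.add_argument("--verbose", action="store_true", help="print progress details")
    args = parser.parse_args()
    # hypothetical API: run(path, verbose=...)
    awesome_app.run(args.path, verbose=args.verbose)

if __name__ == "__main__":
    main()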


I’m not proposing anything here, just looking to see any problems with, or any ways to help facilitate, existing functionality.
