A new PEP to specify dev scripts and/or dev scripts providers in pyproject.toml

Hi :wave:

I’ve been working a lot on how to optimise my workflow (and my teammates’), and each time I’ve tried a solution I’ve found myself stuck by the lack of tooling standardisation in the development process (where are the scripts, what should be installed first, launched from the virtualenv or not…). Lately, instead of trying to properly integrate everything my own way, I concluded that I should contribute some PEPs, but I’m wondering where to start.

So, here I am, proposing a first idea. This is a first for me, so any help or constructive comment is welcome. Don’t hesitate to redirect me to the proper communication channel if I’m posting in the wrong place.


A proposal for exposing development workflow and housekeeping scripts in pyproject.toml


As a project grows, there is always a need for:

  • some housekeeping scripts
  • some specific launch scripts to run tests, lints…
  • some helpers for common tasks

Given pyproject.toml is becoming the new standard for exposing metadata, dependencies… (see PEP 621 and the other PEPs related to pyproject.toml, thanks @brettcannon), it seems a logical place to also expose those scripts.

Having a standard way of exposing development scripts/tasks would allow:

  • developers to know where to look when onboarding onto a project (easy discovery)
  • tools to integrate those scripts easily
  • language-agnostic tools to easily discover and expose some tasks (like pre-commit)
  • optionally, exposing the required dependencies to run them

Given some tools build their task list dynamically (invoke, nox…), the exposure mechanism should allow dynamic task/script providers.

Note: this is not meant to replace the packaging entry points (i.e. project.scripts, project.gui-scripts…), which describe your package delivery and target the product’s customers, while this specification tries to address developers’ needs on the product.


The specification makes no assumptions about the launcher itself; it can be a dedicated tool or your package manager, like poetry, flit… It will be represented as $launcher in the command-line examples.

Manual scripts listing

This mostly copies the npm scripts section, as it is well known and simple.

Manual scripts are exposed as key-value pairs under the scripts section.

The key is the command-line argument expected by whatever launcher uses the section.

The value represents the actual command line executed.

Note: npm provides an interesting mechanism for referencing node_modules binaries without the node_modules/.bin prefix. Given most tools relying on pyproject.toml also provide virtualenv/venv integration and management, I believe the virtualenv bin path should be added as the first $PATH search entry, so that the executables of installed dependencies are easily picked up.
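To make the note above concrete, here is a minimal sketch (names and behaviour are my own assumptions, nothing here is specified) of how a launcher could prepend the virtualenv’s scripts directory to PATH before running a command:

```python
import os

def path_with_venv(venv_dir, environ=None):
    """Return a copy of the environment with the virtualenv's scripts
    directory prepended to PATH, so that `flake8`, `pytest`, etc.
    resolve to the versions installed in the project environment."""
    env = dict(os.environ if environ is None else environ)
    # The scripts directory is "Scripts" on Windows, "bin" elsewhere.
    bin_dir = os.path.join(venv_dir, "Scripts" if os.name == "nt" else "bin")
    env["PATH"] = bin_dir + os.pathsep + env.get("PATH", "")
    return env
```

A launcher would then pass this environment to subprocess.run when executing a script.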

[scripts]
lint = "flake8"
test = "pytest tests/"

Launchers should allow extra command arguments to pass through, either as appended parameters and/or after a double dash (--).

So, in our case, invoking $launcher test -k my-selection would resolve to pytest tests/ -k my-selection, and $launcher test -- -k my-selection to pytest tests/ -- -k my-selection.
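The resolution described above can be sketched in a few lines; the SCRIPTS table mirrors the [scripts] example, and everything else is an illustrative assumption about how a launcher might behave:

```python
import shlex

# The [scripts] table from the example above, hard-coded here
# instead of being loaded from pyproject.toml.
SCRIPTS = {
    "lint": "flake8",
    "test": "pytest tests/",
}

def resolve(name, extra_args):
    """Build the argv a launcher would execute: the configured command
    line for `name`, with any extra arguments passed through verbatim."""
    if name not in SCRIPTS:
        raise SystemExit(f"unknown script: {name}")
    return shlex.split(SCRIPTS[name]) + list(extra_args)
```

For example, resolve("test", ["-k", "my-selection"]) yields ["pytest", "tests/", "-k", "my-selection"], ready to hand to subprocess.run.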


When the command list grows, namespacing commands can help discovery.

Namespacing is done by declaring a dictionary instead of a string.

[scripts]
root-cmd = "a root command"

[scripts.doc]
build = "my doc build command"
publish = "my doc publish command"

[scripts.test]
unit = "my unit testing command"
integration = "my integration testing command"

This would provide the following commands:

$launcher root-cmd
$launcher doc:build
$launcher doc:publish
$launcher test:unit
$launcher test:integration

Note: the namespace separator is still to be defined; I arbitrarily chose : because I like it.
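The mapping from nested tables to namespaced command names can be sketched as a small recursive flattening (the function name and the hard-coded table are illustrative only):

```python
def flatten(scripts, sep=":", prefix=""):
    """Flatten nested [scripts] tables into the command names a
    launcher would expose, joining namespaces with `sep`."""
    flat = {}
    for key, value in scripts.items():
        name = prefix + sep + key if prefix else key
        if isinstance(value, dict):
            flat.update(flatten(value, sep, name))
        else:
            flat[name] = value
    return flat

# The namespaced example above, as parsed from TOML.
scripts = {
    "root-cmd": "a root command",
    "doc": {"build": "my doc build command",
            "publish": "my doc publish command"},
    "test": {"unit": "my unit testing command",
             "integration": "my integration testing command"},
}
```

sorted(flatten(scripts)) gives ['doc:build', 'doc:publish', 'root-cmd', 'test:integration', 'test:unit'], i.e. exactly the commands listed above.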

Dynamic script discovery

When using script providers, each provider entry point should be exposed under the scripts.providers section:

[scripts.providers]
invoke = "invoke.program:PyprojectDiscovery"
another = "path.to.another:EntryPoint"

This syntax allows multiple providers. The key is not significant; it is only there for documentation.

Each provider is allowed to provide namespaces.
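As a rough illustration of what such a provider entry point could resolve to, here is a hypothetical interface; none of these names or methods are specified anywhere, they only show the kind of surface a launcher might expect:

```python
class ScriptProvider:
    """Hypothetical shape for what a scripts.providers entry point
    could resolve to (assumption, not a specification)."""

    def list_scripts(self):
        """Map (possibly namespaced) command names to descriptions."""
        raise NotImplementedError

    def run(self, name, args):
        """Execute the named script with extra args; return an exit code."""
        raise NotImplementedError


class StaticProvider(ScriptProvider):
    """Toy provider exposing a fixed task list under its own namespace."""

    def list_scripts(self):
        return {"demo:hello": "print a greeting"}

    def run(self, name, args):
        if name == "demo:hello":
            print("hello", *args)
            return 0
        return 1
```

A dynamic provider (invoke, nox…) would build the dictionary returned by list_scripts from its own task collection instead of hard-coding it.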

This is the base idea. I deliberately haven’t gone into the details of the entry point specification for providers, as there are some questions to answer first:

  • is it meant to be integrated by package managers or not?
  • what data would need to be exposed?
  • multiple providers or just one?
  • do we specify the provider dependencies here, like it’s done for the build-system and its requires, or do we simply expect the dependencies to be installed beforehand, with the provider being one of them?

What do you think? Should I continue, or should I stop right now because this doesn’t have any chance of being accepted?
What would be the next steps to continue this work ?

I forgot to give some use cases, so here are those I’ve imagined.

project managers integration

Given I use poetry, invoke and tox a lot (I’ve been trying nox recently too), I would love to be able to write:


[scripts.providers]
invoke = "path.to.invoke:EntryPoint"
tox = "path.to.tox:EntryPoint"

and being able to call poetry run test, where test is an invoke-defined task. Same for tox (poetry run tox to run tox without arguments, poetry run tox:test-38 to run a specific task).

If, as with npm, tools handle some special tasks, we can even imagine poetry test being an alias for the special test task (i.e. equivalent to poetry run test).

IDE integration

This would allow IDEs (with proper support) to easily discover the development tasks. We can imagine extensions like Task Explorer - Visual Studio Marketplace discovering the tasks, or PyCharm having a clean integration.


pre-commit integration

I love pre-commit, but I hate having to duplicate some script/task declarations.

This would allow declaring pre-commit tasks with something like:

  - repo: pyproject
    hooks:
      - id: lint
      - id: some:task

Where each ID is a task defined into pyproject.toml or by a provider.


CI integration

I would love to be able to describe a GitHub Action (as an example) like this:

- name: Lint
  uses: my/pyproject/action
  with:
    task: lint

Here are some existing posts that may be related. They don’t exactly share the same scope, but they share a common goal: dev tooling integration and standardisation.
These are old threads in which I didn’t post, to avoid necro-bumping them:

Thanks for the idea! I have thought about how to standardize the way projects write down how to create an environment, run their tests, build their docs, etc. The tricky bit is being flexible enough for projects while making it useful as something more than a set of shell scripts, enough to warrant creating the standard and getting people to use it.

For instance, how does one provide different commands per OS for e.g. test:unit? How does an editor use this command to get the test results (or even the list of tests)? What if your commands have optional dependencies that must be installed first? And this doesn’t even touch on shell differences (e.g. differing escaping rules).

The idea of having tools like Tox, Nox, Invoke, doit, etc. provide a Python entry point which lists their available commands is interesting, but what about optional arguments to those commands?

When thinking about this the farthest I ever get is to somehow tie an optional-dependencies array to a command. Now the “command” should probably be an entry point. The tool would then be expected to read its configuration information from somewhere else so that all you’re really doing is invoking the tool with the appropriate things installed and leaving it up to the tool to configure itself. It’s still not the cleanest solution for e.g. editors that would need to integrate with pytest or unittest, but it’s something.

Otherwise you’re starting to need to go down the PEP 517 route of defining an API that tools can implement to get the full richness that some specific domain like testing may need.


Thanks for your response and for the insights.

I carefully read PEP 517; interesting. I’ll dig into those insights more by the end of the week.

At first sight, I can see four possible levels:

  • basic discoverability: some basic scripts in pyproject.toml, like what is done in package.json; basically what’s exposed in the initial post
  • specific scripts with meaning: in npm these are basically the test command as well as lifecycle hooks; just an extension of the previous point. We could also specify some expected behaviour per script (it needs to have this optional parameter, this exit code…)
  • rich query/call API: this is not a script anymore but an entry point (like a pytest one for tests, for example) which exposes extra features/capabilities: not only can I run the tests, but the entry point exposes everything tools need to call it with refinement (run only this test, mimic the -k behaviour of pytest… API to be defined)
  • rich execution result: instead of simply outputting strings to display, the output follows a specific format to be defined (I think by hook/script type, i.e. tests output JUnit XML, lint uses a comma-separated output… this is only an example)

In my initial proposal I only addressed discoverability, without any other expectation, as that already brought value to me, but I’ll dig into the other points to complete the proposal, taking into account what has already been done on this PEP as well as the other threads from the post above :point_up:

All this is a problem space tox/nox/invoke address directly, so I’m not sure why we need to do anything here, other than perhaps have a unified configuration for those tools. But at that point, why not just add a dev-tool bootstrap step and let each dev tool use its own DSL to define and maintain tasks? In the past, when I gave pyproject.toml a go with tox, my conclusion was that TOML is a very verbose way to define things compared to e.g. an ini file… I don’t think we can really unify the configurations of tox/nox/invoke. Last time this came up (a bit over two years ago), the conclusion was that we probably want to standardize a subset of tox, but that would require a clear explanation of what the frontend (in this case tox/nox/invoke) needs to do and how.

In my experience, adding the virtualenv scripts path as-is to the existing PATH is not enough. Sometimes you want to enforce that executables come from there and nowhere else. Also note that the python executable location and the virtual environment’s scripts location don’t always match, so you need to add at least two paths.
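The "at least two paths" point can be shown with the standard library; this sketch (my own, not part of any proposal) collects the interpreter’s directory and the scripts directory, which differ in some layouts:

```python
import os
import sys
import sysconfig

def launcher_search_paths():
    """The directories a launcher would put in front of PATH: the
    interpreter's own directory and the environment's scripts
    directory, which are not always the same."""
    paths = [os.path.dirname(sys.executable), sysconfig.get_path("scripts")]
    # Deduplicate while preserving order, for layouts where they coincide.
    seen = set()
    return [p for p in paths if not (p in seen or seen.add(p))]
```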

Who would use these commands? What are the build/install/environment variable dependencies of those commands?

What is a provider?


I’m the author of Poe the Poet, which aims to solve part of this problem: managing “scripts” or “tasks” defined in pyproject.toml as commands or references to Python functions, and running them in the appropriate environment.

As @bernatgabor noted, I can’t imagine a single configuration schema that would fit comfortably with many different tools in this space beyond basic use cases.

But what I do find quite interesting is the idea of a standard way to declare which tool should be used to run common tasks (at least in their canonical form), and an interface specification for how the task runner and environment manager components can be programmatically invoked to work together.

I think this would address part of the core problem that initially motivated my work (and a few similar projects) in this area, i.e. providing a convenient way to manage and run dev tasks in poetry projects, where the default solution would usually otherwise be a Makefile with calls to poetry run.

For illustrative purposes I imagine something along the lines of the following:

requires = ["poetry", "poethepoet"]
runner   = "poethepoet.runner:main"
env      = "poetry.env_provider:main"
tasks    = ["test", "lint", "format", "build"]


  1. env_provider conforms to env_provider(project_dir: str) -> EnvDetails
  2. taskrunner conforms to taskrunner(project_dir: str, env: EnvDetails, task_name: str)
  3. EnvDetails is a path or whatever works best for identifying a virtualenv or base_prefix to use.
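The three points above could be glued together roughly like this (all names are illustrative, matching the sketch rather than any real API):

```python
from dataclasses import dataclass

@dataclass
class EnvDetails:
    """Whatever identifies the environment to run in; a virtualenv
    path is the simplest candidate."""
    path: str

def run_task(env_provider, runner, project_dir, task_name):
    """Glue an environment manager and a task runner together the way
    a CI tool or IDE plugin might: resolve the environment, then hand
    it to the runner along with the task name."""
    env = env_provider(project_dir)
    return runner(project_dir, env, task_name)
```

Here env_provider and runner would be loaded from the env and runner entry points declared in pyproject.toml.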

This would make it possible for any CI tool, IDE plugin, or other script to check which tasks are implemented for the project, activate the required environment, and run the tasks.

It would also allow users to mix and match tools for defining their dev/prod requirements (e.g. poetry or pipenv, tox) with tools for running their dev tasks (e.g. poethepoet, invoke, tox).

Maybe it would make sense to also pass a dictionary of options for the task, though I’m not sure how easy this would be to standardise across task runners.

Am I on topic?
