What do people want to see to make supporting WASI easier?

I’ve tried to make the development workflow for WASI easy so that people don’t get mad at me if their commit or PR breaks WASI. :grin: The main things I have done are:

  • Documented how to build for WASI at Setup and building
  • Gotten the commands necessary for a WASI build down to py Tools/wasm/wasi build, with the result runnable via ./cross-build/wasm32-wasip1/python.sh (see the commands after this list)
  • Gotten the toolchain into the devcontainer image so that you can use a container instead of installing the WASI SDK locally
  • Gotten it into CI
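
Concretely, that workflow boils down to two commands:

# build CPython for the wasm32-wasip1 target
py Tools/wasm/wasi build
# run the cross-built interpreter via the generated wrapper script
./cross-build/wasm32-wasip1/python.sh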

Now I’m wondering whether anyone needs anything else. One thing I can think of is adding a wasi subdirectory to .devcontainer/, which could get us Codespaces prebuilds for WASI if the SC approved the expense. You can see an example of this at cpython/.devcontainer/wasi/devcontainer.json at wasi-devcontainer · brettcannon/cpython · GitHub. Beyond the very minor cost to the PSF, this does increase the maintenance of the devcontainer configuration, since devcontainer.json has no concept of inheritance and any configuration change has to be copied between all the devcontainer.json files (still minor, since no relevant change has happened in 2 years).

Or does someone have another idea of what would make things much easier for them? Another way to think about this: if WASI were made a tier 1 platform, what would you want to happen so you didn’t :roll_eyes: at the idea if the SC approved it (not that I’m proposing that right now, but it is theoretically possible based on the requirements of PEP 11 and the fact that the toolchain is accessible anywhere containers are)?

3 Likes

Thank you! You’ve been a great steward of the WASI builds.

Follow through on that.
The current docs don’t mention containers; instead they suggest (wording chosen to match the request for :roll_eyes:) either using GitHub’s clunky Web editor, or connecting a proprietary editor to GitHub’s cloud.

Documenting this is not so much about writing the words as about the support burden, which (in Tier 2) would fall on you. To illustrate, my notes say that I’ve been using this:

# build the dev container image from the checkout’s .devcontainer/ directory
podman build .devcontainer --tag cpython-dev

but since I wrote that note, the container files moved to a different repo. A move like that is something we won’t be able to do (easily) for a documented workflow.

Here’s a draft: Instructions to run the devcontainer locally by encukou · Pull Request #1568 · python/devguide · GitHub

1 Like

WASI as a technology is interesting, but I still fail to see the widespread use which would make it a candidate for tier 1.

The current version of the spec is 0.2.5, and it’s called “Preview 2” with the disclaimer “It’s very much a work-in-progress”.

I’d opt for letting it stay at tier 2 until the spec has stabilized and reached version 1.0.

Regarding your question:

I think it would be better to offer a more standard container setup which does not rely on any external GitHub integration/editor/etc. to work.

I see that you already have a Dockerfile ready for this, but to make it more useful outside the Codespaces environment, it would be better to have a simple compose.yml available, which provides a micro-VM-style approach and mounts the work dir from the host, so that you can:

  • easily edit files using your favorite editor
  • open a shell in the container to access the SDK
  • optionally have a system running, without polluting your standard setup

(I’m using such setups for working with other SDKs as well and it’s a good dev experience)

Here’s a simple compose.yml showing how this can work:

services:
  wasi-sdk:
    # pinned CPython dev container image, which includes the WASI SDK
    image: ghcr.io/python/devcontainer:2025.05.29.15334414373
    container_name: wasi-sdk
    restart: unless-stopped
    # keep the container alive so you can exec into it at any time
    entrypoint: sleep 86400
    volumes:
      # share the host’s ./build directory as /build in the container
      - ./build:/build

The setup would need some additional tooling, but it’s a start and easy to customize.

6 Likes

That’s fine since I already maintain the overall dev container anyway.

We already do; the images are available at Package devcontainer · GitHub.

I’m not sure what “widespread” means to you and I don’t want to get into it at this time. Just consider the tier 1 comment a thought experiment.


Do you have docs you can point me at? I’m not a Docker user and have only done all of this at the request of others, so I don’t know what you’re asking for that goes beyond what @encukou proposes in their PR in terms of documenting how to launch a container. For instance, it isn’t obvious to me from the example what it provides beyond hardcoding the image and a volume mount for something called “build”. Maybe you’re suggesting a mount for your Git checkout somehow? Or just the cross-build output, but in that case how do you get the Git repo into the container?

1 Like

Beyond your PR, did you want to see anything else?

No.
For building CPython, reproducing issues, and iterating on a fix, I’d say the container is fine. It’s comparable to a machine with an OS I’m not quite used to, which is what I use for Mac & Windows.

I’d be surprised if WASI were made Tier 1, but it wouldn’t get :roll_eyes: from me :)

My opinion, for the record, is that this is a specific workflow, and cpython itself doesn’t need to provide the tooling for it. Publishing the container is enough. But if a core dev does want to maintain it for themselves and others, it should go in the devguide & the python GitHub org.
I feel exactly the same way about GitHub Codespaces.

No, not really.

Here’s a short rundown of the typical workflow (a command sketch follows the list):

  • goal: you put all the tooling necessary for WASI builds into a container (so as to not mess up your normal environment)
  • you start the container using docker compose up -d (from within the dir where you put the compose.yml file)
  • from your normal work environment, you can cd into the build/ mount and use it as the work dir to e.g. clone the cpython repo (and any other repos which may be needed), run the build, edit files, etc.
  • you can run commands inside the container using docker compose exec -w /build wasi-sdk bash (which starts you directly in the /build mount)
  • once you’re done, you can shut down the container using docker compose down
  • and, if you want to clean things up, run docker compose rm wasi-sdk
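
Put together, a typical session looks something like this (a sketch, assuming the compose.yml above and cloning into the shared build/ directory):

# start the container in the background
docker compose up -d
# clone CPython into the shared mount (visible as /build in the container)
git clone https://github.com/python/cpython.git build/cpython
# open a shell inside the container, starting in /build
docker compose exec -w /build wasi-sdk bash
# ...edit on the host, build and test inside the container...
# shut the container down once you’re done
docker compose down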

This setup works completely independently of any IDE setup, and that was my point: there shouldn’t be a need to force the use of Codespaces for this kind of thing.

Some extra notes:

  • The above looks more or less the same using podman, an alternative container management tool.
  • You can use image: ghcr.io/python/devcontainer:latest if you always want to use the latest version of the image. Upgrading can be done via docker compose pull with a stopped container (see below).
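
For instance, upgrading to the newest image would look like this (a sketch, assuming the :latest tag from the previous note):

# stop the container, fetch the newest image, start again
docker compose down
docker compose pull
docker compose up -d
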
2 Likes

It seems somewhat like what @encukou documented in Setup and building, but using the image instead of building it, and using docker compose to get the container going. Is that a fair summary?

And how is this better than, say, docker run --interactive --tty --volume ./workspace:/workspace --workdir /workspace ghcr.io/python/devcontainer:2025.05.29.15334414373 bash? Is it because the file embeds the image location?

And where would you expect this compose.yml to live? The CPython repo? Next to the Dockerfile? Or do you just want to document it in the devguide?

I don’t quite understand the “force usage” comment since there’s nothing special about the GitHub Codespaces support we have. It just spins up a container with free compute where CPython has already been cloned and built (and clones your dotfiles). The GitHub Codespaces setup exists primarily for newcomers at sprints so they can get started with nothing more than a web browser thanks to vscode.dev, but it is in no way a requirement to use it.

That assumes, though, that every container image is compatible with all versions of CPython. That’s true right now, but it might not be in the future (e.g. if the WASI SDK version gets pinned for a Python version). But I have been thinking about whether it makes sense to add a tag for the CPython version the container is targeting.

1 Like

Outrageously off topic, but not completely: I tried a Python project that has a “dev container” setup, which that nefarious IDE (wink) VS Code recognizes and offers to start automatically, but using a podman setup rather than docker, and it went down a rathole. It appears podman by default specifies multiple sources, and the dev container stuff tries to helpfully ask which one you want, but when started through VS Code there’s no interactivity in that step, so it just froze, waiting forever for input it would never get. Am I nuts, or is this possible? And if so, does anyone here know if there’s a workaround (mostly Microsoft folk, I guess, given the combination of Dev Container and VS Code)?

I’m confused; why do you need Docker Compose? You should be fine just running the Docker image already provided.

I would file a bug on the devcontainer extension for VS Code.

I was actually thinking about this last night, and one thing we could do is have a run.py in .devcontainer/ that does the docker run --interactive --tty --volume ./workspace:/workspace --workdir /workspace ghcr.io/python/devcontainer:2025.05.29.15334414373 bash command but gets the image location from devcontainer.json (or uses podman if that’s installed). That way people don’t have to look up container image locations or tags. We could use os.exec*() (or subprocess on Windows) to launch the tool. And the volume for /workspace could be .. so it launches with your checkout ready to go. This could obviously get a little fancy with CLI options if we really cared.

If we did do this, I would move the few dev-specific tools that get installed via devcontainer.json into the container itself and thus make the differences as negligible as possible.

If we did this I would view it as providing the easiest way possible to get a shell in the container for folks who don’t know Docker/podman that well and don’t want to use the devcontainer.

Using a compose.yml for this is pretty standard and also much easier to maintain, since you are documenting the setup in nicely formatted YAML rather than in hard-to-remember command line arguments. In addition, it allows configuring more than one container and having them interoperate for a particular project.

I understand that the container image was developed as part of the Codespaces devcontainer setup, but for some reason people in this thread don’t seem to see the point that binding it to GitHub Codespaces is not a good way to “make supporting WASI easier”.

Codespaces are great for users who like to work directly in the cloud, but containers can just as easily be run offline and locally, without any cloud connection. IMO we should cater to those users as well and not make it look like these containers only work in the context of Codespaces, as the docs currently suggest (even though the section on building the containers locally does not actually require Codespaces at all).

A run.py tool may help, but you’d still have to explain how the interaction between the container and the local dir works, and if people want to customize this, they’d have to either edit the Python script or go with a compose.yml file.

IMO, it’s better to stick with standard docker/podman commands, since there’s lots of documentation out there on how to use these, and it’s really not hard. docker compose is deliberately easy to use and provides a wrapper around the complicated docker command line.

As for a place to put the compose.yml: you typically put it into your project directory (and the corresponding Dockerfile into a docker/ subdir). This enables building images locally when needed.

As for pinning the container image: yes, having tags which allow selecting the right container image for a specific Python version would be great. You’d then name these e.g. 3.14 or similar (and since an image can have multiple tags, you can add tags for all compatible Python versions to each image).

Note that it’s no big deal not to have such images available either, since docker compose up -d would simply build the container.

Here’s a compose.yml version which does this:

services:
  wasi-sdk:
    image: ghcr.io/python/devcontainer:latest
    container_name: wasi-sdk
    # build the image locally from the cloned container files if needed
    build:
      context: docker/devcontainer
      dockerfile: Dockerfile
    # only pull the image if it isn’t already available locally
    pull_policy: missing
    restart: unless-stopped
    # keep the container alive so you can exec into it
    entrypoint: sleep 86400
    volumes:
      - ./build:/build

It assumes that you have run git clone git@github.com:python/cpython-devcontainers.git docker to download the docker files.

You can then force the build using docker compose build (as sketched below), and you can go fully local by commenting out the image: line.
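
Putting those steps together (a sketch, assuming the compose.yml above):

# fetch the container definitions into the docker/ subdir
git clone git@github.com:python/cpython-devcontainers.git docker
# build the image locally instead of pulling it
docker compose build
# start the locally built container
docker compose up -d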

Anyway, I don’t want to write a docker compose tutorial here :slight_smile: and I have repeated my point enough times, I believe.

Perhaps we should just move the section “Building the container locally” into a new section “Contributing without Codespaces” and add a section “Using docker compose to run the container” alongside it.

1 Like

I think the issue, at least for me, is that I don’t “see” how the images are “bound” to GitHub Codespaces at all; but see below for where I think the misunderstanding is coming from, because I certainly did not design the container to be specific to GitHub Codespaces.

Ah, so the issue you’re having is that the docs aren’t good enough. That makes much more sense, and I’m happy to improve them! But as I think I have said, I am not a Docker expert at all, so I did the best I could when I first created the docs (which I think were written in preparation for a sprint, so I only had so much time).

I would probably have a --workspace flag that would let you set it to something other than ...

I think that’s one of the things I’m having a hard time understanding about the compose.yml approach: it seems to hard-code the volume mount. I read the Docker documentation for docker compose and couldn’t find a way to override the mount in case you wanted a different directory. And if the file is checked in, that makes editing it a bit harder. Plus, if you already have a git checkout to get the compose.yml file, then having the mount be yet another directory for yet another checkout doesn’t make sense to me. Would it make sense to have a /build directory for out-of-tree builds and a /workspace directory that mounts the checkout? I.e. if we had .devcontainer/compose.yml, then have:

volumes:
  # out-of-tree build directory, kept outside the checkout
  - ../../build:/build
  # the checkout itself, mounted read-only
  - ../:/workspace:ro

That would allow doing /workspace/configure from /build to do an out-of-tree build (sketched below), correct (which I’m inferring is what you’re after)? And then you could make the mount for the checkout read-only.
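
Something like this, presumably (a minimal sketch; any WASI-specific configure options are elided):

# from the host: get a shell in the container, starting in /build
docker compose exec -w /build wasi-sdk bash
# then, inside the container: configure and build out of tree
/workspace/configure
make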

I did try searching for blog posts and such about using compose.yml for development, and it was all about spinning up e.g. a test database rather than building a project using a container, so apologies for all the questions.

Yes, I agree; I was already planning on moving the container part out from underneath the GitHub Codespaces section once we were done discussing things here, as I think that placement was an accident/oversight in the PR.

BTW, why the entrypoint: sleep 86400 line?

2 Likes

Because there’s no such thing as an --interactive service in Docker Compose. The workaround is to make the service do nothing but keep itself alive, so that the container isn’t shut down, and then docker exec -it $container bash your way in.

4 Likes

I opened Add a docker compose configuration file · Issue #135693 · python/cpython · GitHub to track the idea of adding a .devcontainer/compose.yml file to the repo along with a proposed config.