Multiple simultaneous async REPLs

I would like to be able to have multiple async REPLs running within the same interpreter. Each async REPL would be bound to a tty which is sent to it via a Unix-domain socket. This would be very handy when working with a large, multi-faceted, long-running daemon.
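For concreteness, the tty hand-off over a Unix-domain socket can be sketched with the stdlib's SCM_RIGHTS helpers (`socket.send_fds` / `socket.recv_fds`, Unix-only, Python 3.9+). The function names are mine, and a pipe stands in for a real tty:

```python
import os
import socket

def send_tty(sock: socket.socket, tty_fd: int) -> None:
    # Client side: ship the tty's file descriptor to the daemon over a
    # Unix-domain socket as SCM_RIGHTS ancillary data.
    socket.send_fds(sock, [b"tty"], [tty_fd])

def recv_tty(sock: socket.socket) -> int:
    # Daemon side: receive the descriptor a new REPL should bind to.
    _msg, fds, _flags, _addr = socket.recv_fds(sock, 16, 1)
    return fds[0]

if __name__ == "__main__":
    # Demo: a socketpair plays the Unix socket, a pipe plays the tty.
    client, daemon = socket.socketpair(socket.AF_UNIX)
    read_fd, write_fd = os.pipe()
    send_tty(client, read_fd)
    received = recv_tty(daemon)
    os.write(write_fd, b"hello repl\n")
    print(os.read(received, 64))  # the daemon reads from the fd it was handed
```

The received descriptor is a kernel-level dup of the sender's, so the daemon can wrap it in its own stream objects independently of the client process.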

Unfortunately, I have identified five key obstacles to this:

Only one sys.std{in|out|err}

This turns out to be relatively easy to fix with a file-like proxy that forwards via a contextvar, though perhaps a more elegant/performant solution could be found, especially as printing to different places by default in each context has obvious broader usefulness.
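A minimal sketch of such a proxy (the `ContextStdout` name and wiring are mine, not from any existing module): `sys.stdout` is replaced once with an object that looks up the real stream in a contextvar, so each REPL context can point it at its own tty:

```python
import contextvars
import io
import sys

# Each REPL context sets this to its own tty-backed stream; the default
# falls back to the real stdout so non-REPL code is unaffected.
_current_stdout = contextvars.ContextVar("current_stdout", default=sys.__stdout__)

class ContextStdout:
    """File-like proxy that forwards writes to the per-context stream."""
    def write(self, data):
        return _current_stdout.get().write(data)
    def flush(self):
        _current_stdout.get().flush()

def demo():
    sys.stdout = ContextStdout()
    buf_a, buf_b = io.StringIO(), io.StringIO()

    def in_repl(buf, text):
        _current_stdout.set(buf)
        print(text)  # lands in this context's buffer only

    contextvars.copy_context().run(in_repl, buf_a, "hello from A")
    contextvars.copy_context().run(in_repl, buf_b, "hello from B")
    sys.stdout = sys.__stdout__
    return buf_a.getvalue(), buf_b.getvalue()
```

Because `ContextVar.set` inside a copied context doesn't leak out, two REPLs running in different contexts never see each other's output.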

Only one _

I suspect this could be attacked similarly by making sys.displayhook a proxy to a per-REPL contextvar that writes `_` into that context's globals, but it turns out to be very hard to test this without a working multi-REPL framework in which to try it.

Only one readline

Before 3.13 this appeared to be an almost insurmountable show-stopper: GNU readline only supports one instance per process. The two least-bad options would be to reimplement readline from scratch or maintain a separate process instead of merely a separate thread for the REPL’s main loop. Neither of those seemed attractive, so I shelved my idea.

But 3.13 brought the shiny new REPL with a new, far more capable input handler implemented natively in Python. Sadly that, too, appears to expect only one instance to be running. It might be easier to hack on, but there would need to be support for integrating those changes back into the _pyrepl module so the hack didn't diverge.

The asyncio module doesn’t expose its REPL

As noted in this previous topic, most of the logic behind python -m asyncio is contained within an if __name__ == '__main__' block, beyond the reach of easy use from other Python code. This is a pity, since it would be nice to be able to stably reuse the logic for running the REPL in one thread while dispatching code execution against the running program to another (the main) thread.
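That thread split is exactly what asyncio.run_coroutine_threadsafe enables. A toy sketch under my own simplifications (the `repl_thread` stand-in evaluates strings that yield coroutines, which a real REPL obviously would not do so naively):

```python
import asyncio
import threading

def repl_thread(loop, lines, results):
    # Stand-in for a REPL running in its own thread: each "line" is
    # evaluated to a coroutine, then executed on the main thread's loop.
    for src in lines:
        coro = eval(src)  # toy only; a real REPL compiles user input safely
        fut = asyncio.run_coroutine_threadsafe(coro, loop)
        results.append(fut.result())  # blocks this thread, not the loop

async def main():
    loop = asyncio.get_running_loop()
    results = []
    t = threading.Thread(
        target=repl_thread,
        args=(loop, ["asyncio.sleep(0, 'pong')"], results),
    )
    t.start()
    while t.is_alive():          # keep the loop alive to serve submitted work
        await asyncio.sleep(0.01)
    t.join()
    return results
```

The key property is that `fut.result()` blocks only the REPL thread, so a slow evaluation never stalls the daemon's event loop.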

It’s not much code, so it could be cloned and hacked on, except that since…

The _pyrepl module has no documented/stable API

…any custom code derived from python -m asyncio or written along similar lines can't stably invoke the _pyrepl module; the asyncio module is closely coupled to it.

So…

The dream would be for the asyncio module to provide an await asyncio.repl(tty) function that manages the entire process of launching a REPL on that TTY and running it to orderly completion.

Absent that, the minimum viable product would be a documented interface to _pyrepl that supported multiple instances.

Am I a freak for wanting this, or is it an itch other people are also scratching? The code module is recognised as useful enough to be worth exposing; it feels like there should be similar support for async REPLs, and supporting multiple simultaneous instances is very much expected in such an environment.


cc @ambv

If you ask me, less global state is good :)
I'd support decoupling REPL from "the" stdin, stdout, rlcompleter, etc.


We are super not ready to make _pyrepl a supported API with a backwards compatibility policy, but making it fit the asyncio use case better by removing global state sounds like a project I would support within the context of the current REPL implementation for Python 3.15. To be clear, this wouldn’t be backportable to 3.13 and 3.14.


Consider reaching for aioconsole instead. I've routinely had multiple REPLs running in production services deployed in the cloud. :wink:


Honestly, that feels like it handles only the simple part: prompting the user for a line of Python and evaluating it. For readline, it suggests using rlwrap in the client, which does provide history, but no Python autocompletion, let alone all the wondrous new indentation, syntax colouring, etc. that Python 3.13 brings to the party.

That’s from reading the page you linked to, though; I accept it may have capabilities not mentioned there?
