Setting global variables with multiprocessing.Process

I’m not quite sure how to accomplish this.

See code below as a very brief example of what I’m trying to do:

from multiprocessing import Process

a = 0

def show_var():
    for x in range(10):
        print(a)

def set_var():
    global a
    for x in range(10):
        a += x

if __name__ == "__main__":
    #start threads
    Process(target=show_var).start()
    Process(target=set_var).start()

I would expect the show_var() function to print some integer greater than 0 at some point, but it never happens, no matter how long I run the for loop.

Is there a way for the set_var() function to modify the global variable “a” and show_var() recognize this change?

Basically no; that’s not how subprocesses work - they’re completely independent. So you have a few options.

  1. Design your app to return data when it’s done, e.g. with a process pool and mapping over your jobs (see the sketch after this list).
  2. Switch to threads, particularly with a view to per-interpreter GILs and per-thread interpreters.
  3. Redefine your idea of “global variable” in some way - for example, writing something to a file, or using a pipe to communicate with the original process.
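
For option 1, a minimal sketch - compute() and the inputs here are made up for illustration:

from multiprocessing import Pool

def compute(x):
    # Stand-in for whatever each job actually does
    return x * x

if __name__ == "__main__":
    with Pool() as pool:
        # map() farms the jobs out to worker processes and
        # collects the return values back in the parent
        results = pool.map(compute, range(10))
    print(results)  # [0, 1, 4, 9, 16, 25, 36, 49, 64, 81]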

I’d recommend looking at threads; they’re usually the best option for this sort of thing (there’s a minimal sketch below), but without knowing more about your intended code, it’s hard to judge.
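
To show what that looks like, here’s your example with threads instead of processes - same shape, different import. Since both functions now run in the same process, the global really is shared. (The small sleep is my addition, just so the two threads visibly interleave.)

import time
from threading import Thread

a = 0

def show_var():
    for x in range(10):
        print(a)          # sees updates made by set_var
        time.sleep(0.01)  # small pause so the other thread gets a turn

def set_var():
    global a
    for x in range(10):
        a += x

if __name__ == "__main__":
    Thread(target=show_var).start()
    Thread(target=set_var).start()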

The only other thing I didn’t include in the code is that there’s also socket communication in the mix. There’s a loop that starts a network socket listening for commands from another program; when communication happens, it restarts the loop, ready to receive more input if needed. Then another process (monitoring) constantly loops and compares any new information against some sensors being actively polled.

My expectation was that changing a global variable (via this network socket) would have prompted the monitoring loop to change parameters and react accordingly.

But as I experienced (and you explained), that doesn’t seem possible.

I looked at using Queues, but I’m not sure that would work: the socket communication would hold the queue up waiting for network activity, all the while monitoring would need to be happening and couldn’t be.

What would work is this: network activity comes in, the socket closes, then monitoring pauses while the global variable (or something like it) is updated. I don’t know which tool would best serve this.

I also want to add that this doesn’t need to be microsecond-accurate. I made it sound more scientific than it is; I’m just cooking food with it.

If you need something like this, you can use multiprocessing.Manager:

from multiprocessing import Process, Manager

def show_var(a, lock):
    for x in range(10):
        with lock:
            print(a.value)

def set_var(a, lock):
    for x in range(10):
        with lock:
            a.value += 1

if __name__ == "__main__":
    with Manager() as manager:
        a = manager.Value('i', 0)
        lock = manager.Lock()

        p1 = Process(target=show_var, args=(a, lock))
        p2 = Process(target=set_var, args=(a, lock))

        p1.start()
        p2.start()

        p1.join()
        p2.join()

Sometimes when I run this I get all 0s, sometimes other values… it all depends on timing and luck.

The lock is there because technically we’re accessing shared state, and we should lock to prevent concurrent access to the variable.
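
If you don’t need a full Manager, multiprocessing.Value is the actual shared-memory version of this, and it carries its own lock - a minimal sketch:

from multiprocessing import Process, Value

def set_var(a):
    for x in range(10):
        with a.get_lock():  # Value carries its own lock
            a.value += x

if __name__ == "__main__":
    a = Value('i', 0)  # 'i' means a C int in shared memory
    p = Process(target=set_var, args=(a,))
    p.start()
    p.join()
    print(a.value)  # 45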

I’m very curious about this! Do please go into more detail about how Python cooks your food - I’m sure I’m not the only one who’d love to hear!

Anyhow.

Sockets are perfectly valid IPC, so that seems like a reasonable structure to start with. Can you show a snippet of your code, showing the structure of the different processes and their communication? Depending on which part of the app handles the socket, that might be your easiest solution.

I’ll do one better: this is going to be an open-source meat smoker project anyway, so I’ll post the whole project in its current state. Be easy on the judgement; I only occasionally code a project in Python.

That’s not necessarily better - a minimal example is way easier to discuss :slight_smile: But I’ll have a squiz and see if I can make any sort of useful comments here. I spy a Raspberry Pi.

Alright, so I think I understand your control flow. Please correct me where I’m wrong.

You have two entirely separate subprojects here - one in Python, one in PHP - which communicate via a socket. (As a side point, I would strongly recommend simplifying down by removing that distinction, if possible. You could have a web server inside Python, either to do the entire job directly, or to be proxied to from something like nginx rather than having it run PHP. That would save you a bit of trouble.)

Inside Python, you currently have two subprocesses, and a bit of a weirdness strikes me right from the start: you initialize the Pi, then spawn two subprocesses, and then let the original process terminate. Maybe that’s not a problem, but it seems unnecessary and might cause confusion as to which process owns which resources.

But it definitely looks like this is a relatively low-traffic system. It doesn’t need to be multiprocessed. There are, if I’m understanding correctly, two core loops:

  • com_loop listens for a socket connection, reads from it, handles a command, and then responds and closes the socket. (By the way, you shouldn’t need to call listen() inside the loop - call it once, then call accept() repeatedly; see the sketch after this list.)
  • temp_loop constantly checks the temperature of the pit, and reacts according to the parameters it’s been given, which can be set by commands.
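
To show the listen-once shape, a sketch - handle_command and the port number are placeholders, not from your code:

import socket

server = socket.socket()
server.bind(("0.0.0.0", 5000))  # the port is just an example
server.listen()                 # once, outside the loop

while True:
    conn, addr = server.accept()  # block until a client connects
    with conn:
        data = conn.recv(1024)
        conn.sendall(handle_command(data))  # handle_command is a placeholder
    # conn is closed here; go back and wait for the next client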

(BTW, listtostring should be replaceable with a simple join() call - ";".join(stuff) will do just what you’re doing now, only without the trailing semicolon, which you can add back at the end if needed.)

There is one small issue I’m seeing here: temp_loop never seems to block - it’s permanently spinning. The temperature check (check_pit_temp) won’t take long, heat() makes its decisions (also quickly), and then it’s straight back to checking the temperature. You could insert a sleep into that loop without materially changing the behaviour (time.sleep(1) for a full second, or more or less than that depending on your needs), and your Pi won’t be trying to spin constantly, so it’ll use less power.
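
Something like this, as a sketch - check_pit_temp() and heat() are your functions, and I’m guessing at their exact signatures:

import time

def temp_loop():
    while True:
        temp = check_pit_temp()  # quick sensor read (your function)
        heat(temp)               # react to the reading (guessed signature)
        time.sleep(1)            # rest until the next check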

Importantly, once you have both loops spending most of their time waiting (the command loop waiting for another socket, the temperature loop waiting for the next check), you can make them into threads without changing any of the rest of your code. Just replace the bit at the end with:

from threading import Thread

Thread(target=com_loop).start()
temp_loop()

Now you have a main loop and a spun-off thread, running inside the same process, and all your globals really will be shared.

That’s the easiest change to make. If you want to think about some other changes, though, here’s what I would recommend considering:

  • As mentioned above, roll the web server into this script. That’ll save you some hassle.
  • The socket loop will get “stuck” if ever the other end breaks. It’ll eventually time out but it will be unable to handle commands until then. One solution would be to have each incoming socket handled independently, as its own task (potentially its own thread).
  • But threads aren’t strictly necessary here! What you REALLY need is just independence of tasks, all of them mostly waiting. That’s a perfect job for asyncio - see the sketch after this list.
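
Here’s a rough sketch of that shape, just to show the moving parts - handle_client and the port are made up, and check_pit_temp() stands in for your sensor read:

import asyncio

async def handle_client(reader, writer):
    # Each connection becomes its own task, so one stuck client
    # can't block the rest of the program.
    data = await reader.readline()  # one command from the client
    writer.write(b"OK " + data)     # placeholder response: echo it back
    await writer.drain()
    writer.close()
    await writer.wait_closed()

async def temp_loop():
    while True:
        check_pit_temp()        # placeholder: your sensor read + heat logic
        await asyncio.sleep(1)  # yields to other tasks while waiting

async def main():
    server = await asyncio.start_server(handle_client, "0.0.0.0", 5000)
    async with server:
        await asyncio.gather(server.serve_forever(), temp_loop())

asyncio.run(main())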

In a rather cool coincidence, my brother is currently working on his Raspberry Pi project called BioBox. It’s built around an asyncio event loop, and it manages a motor slider (so it’s repeatedly checking the resistance and also sending signals to the motor), with other messages coming from a GUI, a websocket, a plain TCP socket, and an SSH subprocess. The architecture is very similar to a threaded program, and fairly similar to what you’re doing here with subprocesses, but it’s all done in a single thread for efficiency. Feel free to check out his code (I’m also sending him yours), and if you’re curious, you can swing by his Twitch stream to talk about it - he works on this project once a week.

Gotta say, your project looks very very cool. Looks fun.


It nailed the main problem. Only a few other issues came out; chief among them, two functions fought over reading from the ADS at the same time, which would crash one of the threads. That was fairly easily fixed by converting the heat logic to read from global variables instead of querying the ADS chip directly: check_temp() sets the global vars and runs in tandem with the heat() function.
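
In sketch form, the idea looks like this - read_ads() and react_to() are stand-ins for the real ADS read and control logic:

import time

pit_temp = 0  # global: written by check_temp, read by heat

def check_temp():
    global pit_temp
    while True:
        pit_temp = read_ads()  # stand-in: only this thread touches the ADS
        time.sleep(1)

def heat():
    while True:
        react_to(pit_temp)     # stand-in: reads the global, not the chip
        time.sleep(1)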

I definitely want to improve the quality of the code and trim out the unnecessary bits, but right now I just want to get things working.

PHP is my strongest language, I can replace it with Python, but may ask for some help once I get a proof of concept working.

That sounds like a smart way to do it!

Makes sense, and I’m sure we’ll all be happy to help out here.

What about this? :thinking:

Save this into a file named set_var.py:

a = 0

def set_var():
    global a
    for x in range(10):
        a += x
        print(a)

set_var()

and then run:

import asyncio
import sys
import os

async def show_var():
    proc = await asyncio.create_subprocess_exec(
        sys.executable, 'set_var.py',
        stdout=asyncio.subprocess.PIPE)
    print(f'set_var pid {proc.pid}')

    print(f'show_var pid {os.getpid()}\n')
    data = '#'
    while data:
        data = await proc.stdout.readline()
        if data:
            print(data.decode("ascii").rstrip())

    # Wait for the subprocess exit.
    await proc.wait()
    print(f"retcode of 'set_var' {proc.returncode}")

async def main():
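    # Note: asyncio.TaskGroup needs Python 3.11+; on older versions,
    # await asyncio.gather(show_var()) does the same job here.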
    async with asyncio.TaskGroup() as tg:
        tg.create_task(show_var())

asyncio.run(main())