Let’s continue this sub-topic in “Formalize typing support for dynamic class change” instead.
What do you mean by memory safety? Is this not memory safe?
```python
import ctypes

def dereference(address: int) -> object:
    # Let's hope the address is valid!
    return ctypes.cast(address, ctypes.py_object).value

x = "hello"
print(dereference(id(x)))  # hello
```
Well, of course not.
- `id` returns an `int`, not a pointer. That the address can be cast to an integer is a CPython implementation detail.
- The garbage collector is free to collect garbage. After the `id` call, the reference count could reach 0.
- If a pointer is wider than the integral type used for the cast, the pointer would be truncated.
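To make the second point concrete, here is a small CPython-only demonstration: the address from `id()` is only meaningful while the object is still referenced; once the last reference is dropped, it is dangling.

```python
import ctypes

def dereference(address: int) -> object:
    # CPython-only: reinterpret an integer address as an object reference.
    return ctypes.cast(address, ctypes.py_object).value

x = ["original"]
addr = id(x)
assert dereference(addr) == ["original"]  # fine: x is still alive

del x  # refcount hits 0; the memory behind addr is freed
# addr is now dangling. A later allocation may reuse that memory, so
# dereference(addr) can return garbage, an unrelated object, or crash
# the interpreter outright.
```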
So in short there is no way to do this in a “safe” way? Unless you use CPython and keep a reference to the object.
My use case is that I would like to send a reference to an object on a socket to another thread in the same interpreter. The alternative would be to pickle the object and send it, but that seems overkill…
But why do you need to send an object over a socket in the first place when objects are shared among threads in the same interpreter?
The Queue class provides a thread-safe way to send objects between threads, but if you also want to be able to receive messages from the outside (network events, file events, keyboard/mouse, etc.) you need a socket. So if my thread needs to wait for both kinds of input, I need to poll the two. I guess that’s why it is difficult to…
I would like to have a SocketQueue class, where the thread waits for any input, including a reference to an object encoded as a string on the socket. So it works like Queue (which can .put() a reference to an object) but can also wait for file-based events.
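For what it’s worth, one way to get that single-thread wait is the classic self-pipe trick: pair a plain deque with a `socket.socketpair()`, and have `put()` write one wake-up byte so a selector can wait on the queue and real sockets together. The `SocketQueue` name and API below are my own sketch, not an existing class:

```python
import collections
import selectors
import socket
import threading

class SocketQueue:
    """Sketch: a queue that a selector can wait on like a socket."""

    def __init__(self):
        self._items = collections.deque()
        self._lock = threading.Lock()
        # Self-pipe: put() writes one byte so a selector can wake up.
        self._r, self._w = socket.socketpair()
        self._r.setblocking(False)

    def fileno(self):
        # Lets selectors treat the queue as a file-like object.
        return self._r.fileno()

    def put(self, obj):
        with self._lock:
            self._items.append(obj)
        self._w.send(b"\x00")  # wake any waiting selector

    def get(self):
        self._r.recv(1)  # consume one wake-up byte
        with self._lock:
            return self._items.popleft()

# Usage: wait on the queue and (potentially) network sockets in one thread.
q = SocketQueue()
q.put({"hello": "world"})

sel = selectors.DefaultSelector()
sel.register(q, selectors.EVENT_READ)
# sel.register(network_sock, selectors.EVENT_READ)  # a real socket would go here

for key, _ in sel.select(timeout=1):
    if key.fileobj is q:
        print(q.get())  # {'hello': 'world'}
```

The design choice here is that the deque carries the actual object references, while the socketpair carries only wake-up bytes, so nothing ever needs to be serialized.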
I see. I would create another thread to receive serialized objects from a socket and put them into the queue that other producer threads can also put objects into. Your worker thread can then focus on just processing incoming objects from this one queue.
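That bridge pattern could be sketched roughly as follows, assuming a length-prefixed pickle wire format (a local `socketpair()` stands in for the real network connection):

```python
import pickle
import queue
import socket
import threading

def _recv_exact(sock: socket.socket, n: int) -> bytes:
    """Read exactly n bytes, or fewer if the peer closes the connection."""
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            break
        buf += chunk
    return buf

def socket_reader(sock: socket.socket, q: "queue.Queue") -> None:
    """Bridge thread: read length-prefixed pickles and forward them to q."""
    while True:
        header = _recv_exact(sock, 4)
        if len(header) < 4:  # peer closed: signal shutdown and exit
            q.put(None)
            return
        payload = _recv_exact(sock, int.from_bytes(header, "big"))
        q.put(pickle.loads(payload))

# Demo: a local socketpair stands in for a real network connection.
a, b = socket.socketpair()
q = queue.Queue()
threading.Thread(target=socket_reader, args=(b, q), daemon=True).start()

msg = pickle.dumps({"event": "network"})
a.sendall(len(msg).to_bytes(4, "big") + msg)
a.close()

print(q.get())  # {'event': 'network'}
print(q.get())  # None: the shutdown sentinel after the socket closed
```

The worker thread then blocks on `q.get()` alone, and producer threads inside the interpreter can `q.put()` object references directly with no serialization at all.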
Yeah, that’s how I do it now. But it requires two threads per queue, and shutdown is a bit undefined. But yeah, it is probably the way to continue…
So in short there is no way to do this in a “safe” way? Unless you use CPython and keep a reference to the object.
Well, technically… it’s sort of possible in CPython without keeping references.
You can access PyMem_SetAllocator through the C API or ctypes, which will allow you to hijack the object allocator functions. If you really wanted to, you could store addresses that you allocate and then look them up somewhere else (and invalidate those addresses when they’re freed).
I don’t recommend doing this at all, though. You’ll almost certainly kill performance and end up with thread-safety headaches.
I marked one of the responses as the answer. We can move on from this discussion.
I don’t want to hear how bad my suggestion was.
Also, please help me in “Can structs be made using class decorators”.