Linking against installed extension modules

As of now, it doesn’t seem like there’s any specified way to link against another extension module, meaning you can’t expose symbols from an extension to other extensions.

The current solution is to use capsules, which, for those of you unfamiliar with the C API, simply carry a void*. To build a public extension API right now, you attach a capsule object to your extension module, then ship header files that import your module, extract an array of function pointers from the capsule, and populate local function pointers from it.
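
A rough sketch of the exporting side of that pattern, with made-up names (`_mylib`, `mylib_do_thing`, `_C_API`) purely for illustration:

```c
/* Hypothetical exporting module "_mylib" -- a minimal sketch of the
 * capsule pattern described above, not any real library's layout. */
#include <Python.h>

static int
mylib_do_thing(int x)          /* the C function we want to expose */
{
    return x * 2;
}

/* The table of function pointers that consumers unpack from the capsule. */
static void *MyLib_API[1] = { (void *)mylib_do_thing };

static struct PyModuleDef mylib_module = {
    PyModuleDef_HEAD_INIT, "_mylib", NULL, -1, NULL
};

PyMODINIT_FUNC
PyInit__mylib(void)
{
    PyObject *m = PyModule_Create(&mylib_module);
    if (m == NULL)
        return NULL;
    /* Wrap the table in a capsule and expose it as a module attribute. */
    PyObject *capsule = PyCapsule_New((void *)MyLib_API, "_mylib._C_API", NULL);
    if (capsule == NULL || PyModule_AddObject(m, "_C_API", capsule) < 0) {
        Py_XDECREF(capsule);
        Py_DECREF(m);
        return NULL;
    }
    return m;
}
```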

The biggest problem with this approach is that you force developers to use those headers. A Rust program, for example, could not really use your API (without quite a bit of reverse engineering, at least) since the symbols are not actually exported in the compiled object. It would have to manually reimplement your initialization function and figure out how to call functions extracted from a void** (including writing all the typedefs as well).

It would be much nicer if extension modules could directly link against another extension’s shared object file. For example, with setuptools, maybe it could look something like:

Extension("whatever", […], py_libraries=["some_installed_extension"])

I think part of the problem is that there’s a limited number of people who work with both the C API and packaging. I know very little about packaging specifications, so I would be happy to hear issues with this approach and/or previous discussions about this topic.


Can you give an example where this is required?
Without an example to discuss I’m not sure why this is useful.

This problem came up when developing a C library for using asynchronous Python functions from C (see this discussion and this discussion). The solution right now is to use capsules, as shown above, but that limits the library to being used only from C (and probably C++, but I'm not totally sure).

I read the linked discussions. FYI, I have not tried to write an async API from C myself yet.

Why is linking against an extension module's .dll or .so any help with the async function problem that you attack in the linked discussions?

Take my pysvn extension as an example: you would be in unsupported territory if you messed with its internals. Without designing it to be called from C, how could this work? But you can use it via its public Python API.

If the convention were to link against a shared object, then extensions would export symbols instead of doing the capsule gymnastics. In my case, the previously proposed implementation is being turned into a library instead of being added to CPython, per a core developer's request. Ideally, it shouldn't only be usable from C, as lots of extensions are written with PyBind11 or PyO3 nowadays.
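
For contrast with the capsule dance above, here is a sketch of what the symbol-export approach might look like (the `MYLIB_API` macro and header are hypothetical; the missing piece is the packaging/linking support, not the C):

```c
/* mylib.h -- hypothetical header shipped by "_mylib". If consumers could
 * actually link against the installed extension's .so/.pyd, they (or a
 * Rust/PyO3 binding) would resolve this symbol like any other C library
 * function, with no capsule unpacking involved. */
#ifndef MYLIB_H
#define MYLIB_H

#ifdef _WIN32
/* a real header would switch between dllexport and dllimport */
#  define MYLIB_API __declspec(dllexport)
#else
#  define MYLIB_API __attribute__((visibility("default")))
#endif

MYLIB_API int mylib_do_thing(int x);

#endif /* MYLIB_H */
```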

I feel like I am missing something important, sorry if I am asking dumb questions.

Capsules are a way to move a void * blob around the API of a single extension.

At no time are the contents of the void * of use to anything else, right?
Also, I do not understand why async is a special case here.

PyAwaitable isn't a special case here, it's just an example. This is about general C extensions that would like to expose a public API to other extensions. Capsules can be used to move pointers around a single extension, but they are still Python objects and can be added as module attributes. A separate extension can then import the original module, get the capsule, and use its stored pointer to access functions. This is how NumPy does its C API, for example.
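
Roughly, the consuming side looks like this (continuing the made-up `_mylib._C_API` names from the earlier sketch; NumPy's `import_array()` does essentially the same thing under the hood):

```c
/* Hypothetical consuming extension: import the capsule exposed by
 * "_mylib" and populate local function pointers from it. */
#include <Python.h>

static void **MyLib_API = NULL;   /* filled in once at import time */
#define mylib_do_thing (*(int (*)(int))MyLib_API[0])

static int
import_mylib(void)
{
    /* PyCapsule_Import imports the "_mylib" module, fetches its "_C_API"
     * attribute, and returns the pointer stored inside the capsule. */
    MyLib_API = (void **)PyCapsule_Import("_mylib._C_API", 0);
    return (MyLib_API != NULL) ? 0 : -1;
}
```

This is exactly the boilerplate that the shipped headers hide, and what a non-C consumer would have to reimplement by hand.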

Aha! I see what you are asking for with the numpy example.


The (most obvious) problem with linking directly to another package's extension module is that our packaging system does not allow specifying dependencies tightly enough to be sure you'll get something that matches after installation. This is why wheels tend to bundle copies of their native dependencies rather than linking to another package (unlike conda, where dependencies are specified tightly enough to allow it).

Isolated builds also prevent you from doing this when building at install time, as your build time dependencies may differ from the existing dependencies in your target environment.

It can work if the package you depend on is reliable enough and careful enough with its public API that it promises not to change it basically ever, or has an extensibility model that lets it manage changes (and the processes to actually manage them, not just a statement that it will). Basically, it's a large burden on the package you want to use, in order to reduce the burden on you, the user.

Numpy’s approach is probably the easiest way to scale and maintain a public native API. If they want, they can add a new attribute for a new version of the API (personally I’d have made it a function rather than an attribute, for deprecation warnings/lazy initialisation), and they can ensure that a particular API object is aware of the current interpreter, module state, and any other not-quite-global state that matters.

Another approach might be to replace the single capsule with an API object that lets you request certain entry points (e.g. by name or some unique identifier), so that rather than a C struct, you would request each function individually and only deal with a single function pointer.
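
A sketch of how that might look from the consumer side, assuming a hypothetical lookup function stored in the capsule instead of a fixed struct/array layout:

```c
/* Hypothetical "request entry points by name" variant: the exporting
 * module stores a single lookup function in its capsule, and consumers
 * ask for each entry point individually. */
#include <Python.h>

typedef void *(*mylib_lookup_fn)(const char *name);

static int (*mylib_do_thing)(int) = NULL;

static int
import_mylib_v2(void)
{
    mylib_lookup_fn lookup =
        (mylib_lookup_fn)PyCapsule_Import("_mylib._lookup", 0);
    if (lookup == NULL)
        return -1;
    /* Unknown names can return NULL, which fails much more gracefully
     * than indexing into a struct whose layout has changed. */
    mylib_do_thing = (int (*)(int))lookup("mylib_do_thing");
    return (mylib_do_thing != NULL) ? 0 : -1;
}
```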

But ultimately, it’s more work and maintenance for the implementer than the consumer. Make it worth their time :wink:

(There’s no doubt some useful further reading at https://pypackaging-native.github.io/ though I don’t recall if there’s anything specifically addressing this scenario.)


Any other ideas then? NumPy’s approach is easy enough from their end, but in general, trying to extract pointers from a capsule is absolutely painful if you can’t use the defined headers.

Or I guess one more idea would be to contribute bindings for whatever language you want to use, so that the headers aren't needed?


Contributing bindings is pretty much the situation right now. It works, but I'm not sure it's ideal. Python is the only ecosystem I know of that passes around an array of function pointers instead of exporting symbols.

An entry-point API like you suggested sounds promising, but it might also introduce a new nightmare for version management. Versions that don't support a new API wouldn't be EOL for several years, so many libraries would be very hesitant to pick it up before then. If we implemented a better system in Python packaging itself, it could be adopted much more quickly.

Not disagreeing, but since it’s a huge lift to try and convert the packaging ecosystem to support ABI-level dependencies between pre-built packages, I figured I’d focus on how you could solve it today. There are thousands of posts here about how to solve the future (though every case involves a nightmare for version management - hence my insistence that the project with the API needs a process for changing the API, and the best process is “don’t”).

What about explicitly specifying packages with extensions to be installed before the build process (such as in build-system.requires)?

Is that not already a problem when using NumPy’s C API via the capsule?

You already have an ABI dependency if you built against the NumPy headers and then call the numpy functions at runtime regardless of how you get access to the functions.

The problem as I see it with linking directly is that the dynamic linker knows nothing of Python’s sys.path. How would dlopen (or equivalent) know to find the numpy shared library during the import of my_dependent_module.so?

It works out okay if you’re also installing the dependency, because you’ve got a slightly better than average chance of getting the same version. But if you need to work with whatever is already installed, you can’t guarantee you’ll get it at build time. Which means you’re back at hoping that they’ve been looking after their API and ABI.

Strictly yes, though a struct layout is slightly more portable than module exports (and less likely to conflict), passing through a runtime capsule is slightly nicer than a load-time failure, and Numpy does have the option to generate different capsules for different contexts (e.g. subinterpreters).

I'm not sure of the best way to do this on POSIX, but on Windows the way is to import numpy first, and then your native dependency on numpy.whatever.pyd resolves to the one that's already loaded. It's possible that this is enough of a problem on non-Windows that it's the main reason NumPy went the way they did.

C/native APIs tend to try not to add breaking changes anyway, so I think conflicting versions aren't something the installer should have to worry about.

In theory, yes. In practice → Depending on packages for which an ABI matters - pypackaging-native

Isn't there more chance of collisions with capsules? A package could ship a header file for 1.0.0 while the installed module defines the capsule from 2.0.0, and there would be no good way of knowing, unless both the header file and the capsule explicitly expose the version.

To be clear I am not any kind of expert on these things. My question was a genuine question rather than a statement that this is not an easily solvable problem.
