Native dependencies in other wheels -- how I do it, but maybe we can standardize something?

I’d like to discuss mechanisms for python packages/wheels to depend on native dependencies that were installed by other python wheels. Specifically, I’ve already solved this problem in a custom build tool and I think the way it’s implemented would be useful for other projects if this pattern + implementation details existed in some standard/standalone form (maybe as part of packaging, for example?).

If there’s interest from the community in this topic, I’d be happy to work with others to make this a standard mechanism to use native dependencies across wheels. I’m also interested in hearing how others have solved this problem in different and/or similar ways.

Sidebar: Why not conda?

I’m not a data scientist, and I don’t use conda myself. When I first started going down this road in late 2019, my work colleagues who did use conda would often complain about issues they had with using conda, and the few times I had to interact with conda I didn’t have a particularly good experience.

I do recognize that Conda solves a lot of these problems that I had to deal with, but my customers are high school students and their (often non-software-engineer) mentors. If my colleagues at work were running into a variety of issues using conda, there was no way I was signing up for that support headache.

A lot of people use Conda to solve the binary dependency problem, but a lot of people can’t or won’t use it. This thread is about ways we can solve it when not using conda.

About Me

I am the primary maintainer of RobotPy, which is an open source project that allows high school students to use Python to program and control the robots they create for the FIRST Robotics Competition (FRC). Currently, FRC officially supports C++, Java, and LabVIEW as options for programming your robot. RobotPy has been an unofficial python alternative since 2011, but is expected to become an officially supported language option in 2024. There are 3000+ FRC teams, and last season around 50 teams used RobotPy to program their robot directly, but many more used some of our libraries for interacting with the robot.

Motivation

From 2015-2019, RobotPy maintained a pure python port of the programming libraries needed to control an FRC robot. For a variety of reasons this was becoming untenable, and after considering many options I turned to pybind11 to wrap the existing C++ libraries.

The official C++/Java libraries live in a massive monorepo, with a handful of native libraries that depend on other native libraries. Additionally, there are vendors that provide advanced motor controllers and other sensors for use in FRC. These vendors publish binary-only releases of the libraries needed to interact with their motor controllers, so I needed something that could use native libraries that had dependencies on other native libraries.

For example: most of the vendor libraries depend on wpilib, which depends on hal and ntcore and wpimath, which depends on wpinet, which depends on wpiutil.

My goal was to make pip install robotpy Just Work. To solve all of this, I wrote robotpy-build, which parses C/C++ header files and semi-automatically generates pybind11-based wrapper libraries (and type stubs) that can be imported by python code. While a lot of what it does is very cool, that’s a whole separate topic. I will focus on a very narrow subset of what it does in this thread.

Challenge: make ‘import _myext’ work

Anyone who has tried to do this immediately runs into this problem: if my extension depends on a native library, how do I convince Python and/or the OS to find the correct library? If the library is installed to the system, this is easy enough – but if it lives inside another wheel in site-packages, the system loader isn’t going to look there automatically.

Often the naive solution to this is to modify the system path or LD_LIBRARY_PATH to force the system loader to find your library, but that solution doesn’t really feel right to me. Additionally, if the library you are trying to load exists on the system AND in a wheel somewhere, there is potential for the system to load the wrong library.

There are approaches that work for all the major operating systems, but they vary slightly.

macOS

There is only one way to do it on macOS. The system loader insists that it must be able to find any referenced libraries, and will not resolve symbols that aren’t in a referenced library. However, there is a nice way to tell the loader to find a library relative to the library that references it – @loader_path.

Since our wheels install to site-packages, we know where the libraries will be relative to our library, so we use delocate to modify where the libraries are loaded from (here’s how robotpy-build does it).

Given this simplified site-packages for my ntcore package and its dependencies wpiutil and wpinet:

+- wpiutil
|  +- lib
|     +- libwpiutil.dylib
+- wpinet
|  +- lib
|     +- libwpinet.dylib
+- ntcore
   +- lib
   |  +- libntcore.dylib
   +- _ntcore.cpython-311-darwin.so

Here’s the (simplified) output of otool -L for the modified libraries:

$ otool -L ntcore/lib/libntcore.dylib
ntcore/lib/libntcore.dylib:
        @loader_path/../../wpiutil/lib/libwpiutil.dylib
        @loader_path/../../wpinet/lib/libwpinet.dylib

$ otool -L wpinet/lib/libwpinet.dylib
wpinet/lib/libwpinet.dylib:
        @loader_path/../../wpiutil/lib/libwpiutil.dylib

$ otool -L ntcore/_ntcore.cpython-311-darwin.so
ntcore/_ntcore.cpython-311-darwin.so:
        @loader_path/../wpinet/lib/libwpinet.dylib
        @loader_path/../wpiutil/lib/libwpiutil.dylib
        @loader_path/lib/libntcore.dylib

With this setup, an import of ntcore._ntcore will just work in any standard CPython installation, virtualenv or not (unless you mix system + virtualenv + user site-packages… but don’t do that).
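For reference, the rewrite that delocate performs boils down to editing the Mach-O load commands with install_name_tool. A minimal sketch of that step driven from Python (the original install name below is an assumption; a real tool reads it from otool -L first):

# Sketch: rewrite one dependency entry in a dylib's load commands so it is
# resolved relative to the loading library. The "old" install name here is
# an assumption; delocate/robotpy-build read the real one from `otool -L`.
import subprocess

def relink(dylib: str, old_name: str, new_name: str) -> None:
    subprocess.run(
        ["install_name_tool", "-change", old_name, new_name, dylib],
        check=True,
    )

relink(
    "ntcore/lib/libntcore.dylib",
    "libwpiutil.dylib",                                 # assumed original install name
    "@loader_path/../../wpiutil/lib/libwpiutil.dylib",  # matches the otool output above
)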

One caveat: to modify an install name there has to be enough space in the binary’s headers for the modified name. You can pad the headers to the maximum by linking your native libraries with -Wl,-headerpad_max_install_names.
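If you drive the build through setuptools, passing that flag might look something like this (a sketch only; the extension name and sources are placeholders, not taken from robotpy-build):

# Sketch: pad the Mach-O header at link time so install names can be
# rewritten later. The extension name and sources are placeholders.
from setuptools import Extension, setup

ext = Extension(
    "ntcore._ntcore",
    sources=["ntcore/src/main.cpp"],
    extra_link_args=["-Wl,-headerpad_max_install_names"],
)

setup(ext_modules=[ext])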

Windows

Windows doesn’t have a mechanism to tell the system loader to resolve libraries relative to a library, but it turns out that as long as the dependencies of a library are already loaded in the process, Windows will use those to resolve its imports and it Just Works. We can use ctypes.cdll.LoadLibrary() to manually load each needed library in the correct order, and when we finally do an import ntcore._ntcore it will load without any problems.
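A minimal sketch of that load order (paths and DLL names are illustrative; the real logic lives in the generated _init_*.py files shown later in this post):

# Sketch: load native dependencies bottom-up with ctypes so the DLLs are
# already in the process before the extension module is imported.
# Paths and DLL names are illustrative.
import site
from ctypes import cdll
from os.path import join

sp = site.getsitepackages()[0]  # assumes everything lives in one site-packages

for dll in ("wpiutil/lib/wpiutil.dll", "wpinet/lib/wpinet.dll", "ntcore/lib/ntcore.dll"):
    cdll.LoadLibrary(join(sp, dll))

import ntcore._ntcore  # resolves its DLL imports from what is already loaded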

Linux

For Linux you can actually use either approach: you can modify the ELF to resolve libraries relative to the library (just like macOS), or you can take the approach we use for Windows and manually load each library in the correct order. My build tool takes the latter approach, but either is fine.
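If you wanted the macOS-style approach instead, the ELF equivalent is an $ORIGIN-relative RPATH, which patchelf can apply to an already-built library. A rough sketch (paths mirror the layout above; this is not what robotpy-build actually does):

# Sketch: make libntcore.so locate its wheel-installed dependencies relative
# to itself via an $ORIGIN-based RPATH (the ELF analogue of @loader_path).
# Paths are illustrative; robotpy-build loads libraries manually instead.
import subprocess

subprocess.run(
    [
        "patchelf",
        "--set-rpath",
        "$ORIGIN/../../wpiutil/lib:$ORIGIN/../../wpinet/lib",
        "ntcore/lib/libntcore.so",
    ],
    check=True,
)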

How robotpy-build deals with native dependencies across wheels

There are several pieces that need to be solved:

  • At build time: finding all the pieces needed to compile + link
  • At run time: finding and loading native dependencies in the correct order before
    native python extensions are imported (see above discussion)

The FRC official libraries + headers are distributed in a maven repository, so we have a separate mechanism that downloads them and puts pieces in the right places for building a wheel. That won’t be discussed here – every project is going to obtain its native libraries in a different way, so below we assume that’s all figured out.

Build Time

At build time, we need something effectively like pkg-config, but one that works in the python ecosystem and only finds things that were installed by other wheels. The build system needs to find at least the following:

  • library names
  • link paths to library
  • associated include directories for header files
  • (pybind11-specific) type caster header files

To find these, I chose to use setuptools entry points, with a robotpybuild entry point group. Each entry point has a name (identifying the native dependency) and an associated python module.

Let’s examine my pyntcore project, which both provides a library for others to use and consumes other libraries. Here’s the entry_points.txt in the installed *.dist-info:

[robotpybuild]
ntcore = ntcore.pkgcfg

This pkgcfg file is generated by the build system when it builds a wheel (I’m not going to discuss how the build system figures these things out, since that’s very build-system dependent; the important part is that it can figure it out and generate the pkgcfg.py), and is distributed with the wheel. At build time, when resolving dependencies, the build system finds the associated entry point and directly execs ntcore.pkgcfg, while being careful NOT to import its parent package (which wouldn’t work when cross-compiling); a rough sketch of this lookup appears a bit further down. Here’s that file on macOS:

# fmt: off
# This file is automatically generated, DO NOT EDIT

from os.path import abspath, join, dirname
_root = abspath(dirname(__file__))

libinit_import = "ntcore._init_ntcore"
depends = ['wpiutil', 'wpinet']
pypi_package = 'pyntcore'

def get_include_dirs():
    return [join(_root, "include"), join(_root, "rpy-include")]

def get_library_dirs():
    return [join(_root, "lib")]

def get_library_dirs_rel():
    return ['lib']

def get_library_names():
    return ['ntcore']

def get_library_full_names():
    return ['libntcore.dylib']

Most of this information’s purpose is obvious (and similar to what pkg-config provides), but I’d like to call attention to several specific pieces:

get_include_dirs and get_library_dirs retrieve the locations of libraries and include files. I chose to include them in the wheel in the package directory because other ‘standard’ locations (in particular, the headers argument for setuptools) didn’t seem to work the way I would expect and sometimes would try installing to system locations, and IIRC didn’t work in editable installs (which are really important for my development setup because pybind11 takes FOREVER to compile some of my template-heavy dependencies).

depends indicates other robotpy-build compatible native dependencies of this library, which can be looked up by finding the associated robotpybuild entry point and loading its pkgcfg file.
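Putting those pieces together, the build-time lookup described above might look roughly like this (a sketch only, not the actual robotpy-build code; it assumes the Python 3.10+ importlib.metadata API):

# Sketch: find a dependency's "robotpybuild" entry point, exec its pkgcfg
# module straight from its file path (so the parent package is never
# imported, which matters when cross-compiling), then recurse into `depends`.
import importlib.util
from importlib.metadata import entry_points  # Python 3.10+ keyword API
from os.path import join

def load_pkgcfg(dep_name: str):
    ep = next(e for e in entry_points(group="robotpybuild") if e.name == dep_name)
    pkg, _, mod_name = ep.value.rpartition(".")  # e.g. "ntcore", "pkgcfg"
    pkg_spec = importlib.util.find_spec(pkg)     # locates the package without executing it
    pkgcfg_py = join(pkg_spec.submodule_search_locations[0], mod_name + ".py")
    spec = importlib.util.spec_from_file_location(ep.value, pkgcfg_py)
    pkgcfg = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(pkgcfg)
    return pkgcfg

def resolve(dep_name: str, found=None):
    # Depth-first walk over `depends`, collecting pkgcfg modules by name
    found = {} if found is None else found
    if dep_name not in found:
        found[dep_name] = pkgcfg = load_pkgcfg(dep_name)
        for dep in getattr(pkgcfg, "depends", []):
            resolve(dep, found)
    return found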

libinit_import specifies a python module that MUST be imported before importing any other python package that tries to use the native dependency. This module is responsible for loading the native library and its dependencies. This leads us very logically into the next section…

Runtime

When a user uses pyntcore, our goal is to make it so that they just need to import ntcore, without needing to know about all of the magic native dependency stuff that we discussed above. That package has an __init__.py that does a few things:

from . import _init_ntcore

from ._ntcore import (
    # Here we expose the symbols from the native extension, but
    # elided here for brevity
)

Because python always loads __init__.py first, the first import brings in the libinit_import module mentioned above, which ensures that any native dependencies of the compiled python extension _ntcore are loaded before the extension itself is imported.

Let’s look at ntcore/_init_ntcore.py on Linux:

# This file is automatically generated, DO NOT EDIT
# fmt: off

from os.path import abspath, join, dirname, exists
_root = abspath(dirname(__file__))

# runtime dependencies
import wpiutil._init_wpiutil
import wpinet._init_wpinet
from ctypes import cdll

try:
    _lib = cdll.LoadLibrary(join(_root, "lib", "libntcore.so"))
except FileNotFoundError:
    if not exists(join(_root, "lib", "libntcore.so")):
        raise FileNotFoundError("libntcore.so was not found on your system. Is this package correctly installed?")
    raise FileNotFoundError("libntcore.so could not be loaded. There is a missing dependency.")

This accomplishes the runtime loading of native dependencies discussed above, which is needed for Windows and Linux. On macOS this isn’t strictly needed to resolve the native dependencies, but I keep it in there because it’s simpler, and as a side effect it also loads the python dependencies, which pybind11 needs in order to resolve types.

Once _init_XXX.py is imported, all native dependencies are loaded in the process, and the import of _ntcore.cpython-311.so (or whatever it’s named on your platform) will succeed; the extension can then be used just like any other native python extension.

Cross-compilation

The robot controller we use runs Linux on ARM, so we cross-compile all of our packages. I use crossenv to do this, and as long as I don’t try to import anything directly from the native compiled libraries at build time this scheme works fine.

My proposal

… well, I don’t quite have one yet. I’ve been using this method for 3 years now and all the pieces I’ve described have been fairly static. However, if nobody is interested in this, then it’s not really worth taking the time to separate it from robotpy-build.

Final Thoughts

The robotpybuild pkgcfg entrypoint stuff probably would need to be very different for a standardized version of this:

  • Different name for the entry point (or maybe a better registration system?)
  • Originally for the pkgcfg file I used a pure python file that cannot depend on anything other than the standard library, but I think a standardized version of this should just be a JSON or TOML blob instead.
  • A standardized version of the pkgcfg thing probably needs compile flags and other things that pkg-config already provides… though I haven’t needed them, certainly some projects might.

Additionally, I am conscious that some of the things done here fly in the face of some ‘standard’ guidelines for python wheels (particularly with older versions of manylinux, and I’m sure it doesn’t pass auditwheel) – but it does work, and even a high school student can use the resulting wheels. Most of the issues teams have had when using RobotPy have been with my autogenerated code, and (almost) never with not being able to find library dependencies.

Want to see how this works in practice? There are a dozen or so RobotPy packages published to PyPI as wheels for macOS, Windows, and Linux for Python 3.7 - 3.11. Just pip install robotpy[all] and take a look.

I’m optimistic that we can leverage some of these ideas to make native dependencies work more easily in python. Thanks for reading!


Haven’t had a chance to parse the rest of the post yet, but wanted to say on this point that os.add_dll_directory has existed since 3.8 and allows you to specify an additional directory to search for DLL dependencies (not the PYD itself). It was added at the same time that PATH was removed from the default DLL search.

Also, you only need this if the DLL is not in the same directory as the PYD that is loading it, since that’s always preferred over any other path. Or as you point out, you can load the dependencies manually, as anything already loaded (based on the filename) will not be loaded again.
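For reference, a minimal sketch of using that API before importing an extension (the directory path is purely illustrative):

# Sketch: register an extra DLL search directory (Windows, Python 3.8+)
# before importing an extension module. The path here is illustrative.
import os

if hasattr(os, "add_dll_directory"):
    os.add_dll_directory(r"C:\path\to\site-packages\wpiutil\lib")

import ntcore._ntcore  # the loader can now find DLLs in that directory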


Yup, you’re absolutely right. At the time I needed to support 3.6/3.7 so it wasn’t available.

Of course, it suffers from the same problem that setting LD_LIBRARY_PATH does: you’re not guaranteed that the system will load a particular DLL (or a particular dependent DLL) if a different version of it happens to exist on the system somewhere. So loading everything manually seems like a better approach for this specific set of problems, since the build system is able to set that up properly.

I’m trying to figure out what you are doing and why it’s hard.

Is it that you know that wheel xyz will install a .dylib (.dll or .so) for libfoo and you want to avoid building and shipping libfoo with your wheel?

Then when you run your C/C++ extension that depends on libfoo you do not know how to make that work?

I thought that you would have a loader spec in your extension that can reference the libfoo where it was installed.

Of course you need the header files and library to be able to link your extension.

Yes, that. If another package already contains libfoo, why would I ship it with my wheel when I can just use libfoo?

A more concrete example perhaps:

Lots of people write things for OpenCV. There is a package on pypi called opencv-python that installs the python bindings for OpenCV. Let’s say that I have some C++ code that uses OpenCV that I want to use.

If opencv-python shipped with OpenCV libraries in it (though, last time I checked, they just statically linked the underlying opencv libraries), there is currently no standard way for me to tell a python-centric build system “just link to the opencv library provided by opencv-python”. Instead, I either have to use some magic API that opencv-python would provide (which of course might be different from how other packages do it) to find its libraries and headers, or I have to link to the system install of opencv (and use the system’s opencv python bindings).

It would be good if we could have a standard way to have wheels that could have native dependencies that are in other wheels and not installed by the system.

Okay, this wasn’t clear at all in your original post :slight_smile:

This is a very challenging problem, but it basically comes down to not being able to guarantee that the ABI (or even the API) matches. A lot of these details are negotiated at compile time by the C compiler, so you end up having to lock the version of the dependency at compile time and then ensure that you get exactly the same version later on. With two (or more) independently released packages, this is nearly impossible.

A properly defined C library can do it, but again, most are not designed to handle this.

The best way to do it would be to write a thin (or thick!) Python wrapper around the C library, distribute that with the C library included, and then encourage everyone who needs that C library to use your Python API instead. Many won’t like it, because they lose native performance, but that’s ultimately the tradeoff.

Fair enough. :slight_smile:

So, you’re right. But also, this slightly misses the point I think.

Certainly ABI/API issues are problematic to deal with. This is especially challenging for C++ libraries – I recently ran into an issue where my native compiler was a different version than my cross-compiler, and there was a mismatch in the layout of std::span.

But arguably, right now we can’t really encounter these API/ABI issues because this sort of linking isn’t really even possible to do generically right now.

I think if we were able to do this more easily, then there would be more motivation to figure out ways to solve these other issues as well.

That only works well if that library doesn’t have dependencies. Then you have to bundle that, but if you have something else that also depends on it…


I never said it was a good way, only that it’s the best way :wink:

I try to take this same position, but packaging has mostly burnt it out of me :frowning: I don’t want to be discouraging, but ultimately it seems like discussions go nowhere and it requires a popular implementation. Ignore the people who say it can’t be done, do it anyway, and when it seems inevitable then your idea might get some traction.


Sorry to hear that.

Well the good news (for me anyways) is that I do have a solution for all of this, and it works for me, and I’m actively using it. Maybe someone else will care about it too.

Or… would it be a viable idea to write no wrapper at all, and distribute a wheel containing only the C library as a data file? You need to jump through some hoops to name (when packaging) and locate (when depending on) the libs, perhaps taking some inspiration or implementation from auditwheel etc., but I think this is technically doable?

I wrote up some notes on how to do this years ago, but stalled out due to lack of time + folks not seeming too interested: wheel-builders/pynativelib-proposal.rst at pynativelib-proposal · njsmith/wheel-builders · GitHub
It’s exciting to see interest again!

The challenging part is making wheels that support arbitrary Python environment layouts. Your macOS solution works for the case where all the packages are installed into the same site-packages directory, and that should work… a lot of the time? But normally if package A depends on package B then all you need is for A and B to both be findable in sys.path – requiring that they both be installed into the same directory is an extra requirement that normal packages don’t need, and can break down when dealing with things like in-place installs, or environments that are assembled on the fly by just tweaking environment variables instead of copying files around (like I’ve been experimenting with).

However, this is solvable. It’s just frustratingly complicated.

For Windows and Linux: as you noted, if a shared library declares a dependency on some-lib, and a shared library named some-lib has been loaded into the process, then the dynamic loader will automatically use that some-lib to resolve the dependency. You do want to make sure that your libraries all have unique names, to avoid collisions with potentially-incompatible binaries from other sources – so e.g. you might want to rename your version of libntcore to pynativelib-libntcore, in case the user has another copy of libntcore floating around under its regular name. Fortunately, patchelf can do this renaming on Linux, and machomachomangler can do it on Windows.
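On Linux that renaming could be done with patchelf roughly like this (a sketch; the mangled name and file paths are illustrative, not taken from any existing tool):

# Sketch: give the wheel's copy of libntcore a unique SONAME and point its
# consumers at the new name, so a system-wide libntcore can never be picked
# up by accident. Names and paths are illustrative. The wrapper package
# loads this file explicitly first, so the loader matches it by SONAME
# rather than by filesystem search.
import subprocess

def run(*args: str) -> None:
    subprocess.run(args, check=True)

# 1. Rename the library itself
run("patchelf", "--set-soname", "libpynativelib-ntcore.so",
    "ntcore/lib/libntcore.so")

# 2. Update anything that depends on it to ask for the new name
run("patchelf", "--replace-needed", "libntcore.so", "libpynativelib-ntcore.so",
    "ntcore/_ntcore.cpython-311-x86_64-linux-gnu.so")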

This part is relatively routine – the library vendoring tools auditwheel for Linux and delvewheel for Windows do the same renaming trick for similar reasons. That’s why I wrote the Windows renaming code in machomachomangler in the first place :slight_smile:

For macOS: as you noted, it is absolutely impossible to convince the macOS dynamic loader to look up a library by anything other than an absolute or relative path. Super annoying. However! There is a trick. Explaining it requires a brief digression into obscure corners of Mach-O.

Say you’re building a library that uses cv::namedWindow from opencv. What normally happens is:

  1. Your compiler “mangles” (this is a technical term) the C++ name into a simple string, probably something ugly like _ZN2cv10namedWindowEv.
  2. It looks around at the libraries you have to link to, and finds that this symbol is exported by libopencv.dylib
  3. It makes a note in your binary that when it wants to call cv::namedWindow, it should do that by first finding libopencv.dylib, and then looking inside it for _ZN2cv10namedWindowEv.

macOS calls this procedure the “two-level namespace”, because your binary stores two different pieces of information: the library to look in (as a filesystem path!), and the symbol to look for inside that library.

But! macOS also supports an alternative lookup procedure, the “single-level/flat namespace”. When using this, your compiler just writes down that it wants a symbol named _ZN2cv10namedWindowEv, without any information about where it should come from. And then when your binary gets loaded, the loader looks around through all the libraries that are loaded into the process for a symbol with that name. Now that pesky filesystem lookup is gone!

So all we need to do is:

  • make sure that our wheel-wrapped libopencv.dylib is already loaded before loading our library
  • …somehow make sure that every single symbol that our wheel-wrapped libopencv.dylib exports has a globally unique name, so it can’t accidentally collide with some other random binary, and that our library looks up symbols by those globally unique names.

Unfortunately, our libraries probably don’t have globally unique symbol names; that’s the whole reason why macOS added the two-level namespace stuff.

But, we can fix this! All we need to do is go through the symbol tables stored inside libopencv.dylib and inside our binary, and rewrite all the symbols to make them unique, so e.g. now libopencv.dylib exports _our_special_uniquified__ZN2cv10namedWindowEv, and that’s what our binary calls, and we’re good.

Of course rewriting symbol tables is pretty complicated, especially since macOS stores symbol tables in this weird compressed format where instead of a table, it’s actually a program in a special bytecode language that when executed outputs the symbol table, so you need to evaluate the program and then generate a new one? It’s kinda wild tbh.

Luckily, machomachomangler has code to do exactly that. Well, it’s not really luck; this is why I wrote it :slight_smile: It’s been like 5 years since I looked at this last and I don’t think anyone has used that code in production, so it might need some tweaks to bring it up to speed, but that shouldn’t be too hard as long as someone is interested in making it happen.

Nice – if you read the pynativelib proposal I linked at the top, then I think we converged on a pretty similar design. The big thing I’d suggest adding is metadata for “if you build your wheel against this version of this package, then here are the install-requirements that you should add to your final wheel’s metadata, to make sure that an appropriate version of this package is available”. So e.g. building against ntcore 1.2.7 might suggest a runtime requirement of ntcore == 1.2.*, >= 1.2.7, or ntcore == 1.*, >=1.2.7, or even ntcore == 1.2.7 – whichever one matches ntcore’s ABI compatibility guarantees, which the ntcore distributors will understand better than their users.

This is doable – we already have package names and version constraints and all that to solve these exact problems for Python libraries :slight_smile: We just need to figure out how to encode the C compatibility constraints using those tools.

(It’s even possible to support e.g. two incompatible versions of opencv. Just name the python packages opencv-v1 and opencv-v2, and make sure the binary mangling uses different names, so wheels depending on opencv-v1 get the v1 symbols and wheels depending on opencv-v2 get the v2 symbols.)


This is a very nice writeup of the motivation and magic that is behind machomachomangler. Quickly scanning the PyPI page you linked to, and the pynativelib-proposal, I don’t see this clear explanation of the solution used for macOS. Did I miss it? I do see “I promise it will all make sense once I have a chance to write it up properly…”. Maybe that comment could be replaced with a link to this description?

Provided they never interact with each other, at which point they might as well have brought their own copy of the library (or statically linked it) and completely and trivially avoid conflicts with other packages.

If you have a library that opens an image, then another one that performs operations on it, those almost certainly are going to need the same version of the library (unless you’ve got what I referred to above as a well-designed library).

And I’d argue that what we currently have for Python libraries doesn’t solve these exact problems, or we wouldn’t be having the other discussion about how to solve these problems :slight_smile: The way to “solve” it with what we currently have is for libraries to pin the version of the dependency they need, thereby almost certainly causing conflicts with any other library following the same advice, and so making the whole thing unusable.

I agree there are many circumstances we can hold our noses and make something good enough to usually work, but we also know that just pushes the edge cases further out and makes them harder to discover and resolve.

I’ve toyed with the idea of adding a libraries directory to Python, but at the time I was not sure how to implement it in a viable way. The idea would be that the interpreter would include this libraries directory in its library search path. If we can figure out how to do that portably, I think it’d be viable.

Several platforms, Linux included, load shared libraries into a single process-wide namespace, so if two different packages try to load two different versions of the same library, you’ll likely have issues. I think the solution to this is to standardize a way to distribute these libraries, so that we can prevent two versions of the same library from being installed in the first place.

Sure, and that’s what we do now. But vendoring everything like this has two major downsides:

  • Large foundational libraries end up getting duplicated lots of times, e.g. every numerics-related project ends up shipping its own copy of BLAS/LAPACK because they all need basic matrix operations[1]. This wastes space (Intel’s version of these libraries is >100 MiB for each copy!) and can be inefficient at runtime (e.g. if they all create separate threadpools)

  • Keeping all those copies up to date is a hassle. For example, OpenSSL gets shipped in lots of different projects’ wheels right now (e.g. psycopg2 wheels have their own copy, because it’s a wrapper for libpq2, and libpq2 uses OpenSSL). So every time OpenSSL fixes a CVE, all these packages need to re-roll new binary wheels, and everyone has to upgrade. In practice this doesn’t happen, so people just keep using a mixture of old versions of OpenSSL. It’d be way easier if there was just one project distributing OpenSSL and pip install --upgrade openssl-but-wrapped-in-a-wheel would fix them all at once.

So those are problems that sharing libraries between wheels would solve.

It’s true it doesn’t magically let you link together libraries with incompatible ABIs, but it doesn’t have to in order to be useful :slight_smile:

Unfortunately, macOS just doesn’t support this. There is no concept of “library search path” at all. Very frustrating.

Fortunately, all the major platforms have ways to namespace/isolate symbols, so this is avoidable. It just requires becoming way too familiar with arcane details of dynamic loaders…


  1. Well, technically, for this specific case, scipy has some clever machinery to export a table of function pointers for BLAS/LAPACK inside a PyCapsule, so other C code can use Python to find the function pointers but then switch to using them through C. And Cython has some special support to make this more ergonomic. But doing this for every library in the world is a non-starter. ↩︎


I thought you can do this with otool and setting an rpath, as discussed here: command line - Print rpath of an executable on macOS - Stack Overflow

Oh, right, I forgot about @rpath :slight_smile: Yeah, an executable can have a list of directories on the rpath, and you can have a library that’s loaded from “any directory on rpath”. But the list of directories is baked into the executable binary itself – there’s no way to change it at runtime, or to collect all the library directories from sys.path, or anything like that.


I agree, but you’re doing a lot of heavy lifting with this sentence.

OpenSSL is one of the few libraries that could be updated this way, because they’re very careful about their API. (I’d never want to ship CPython this way, because we’re not, by comparison :wink: ) We’d still end up with parallel installs of 1.0, 1.1, probably 1.1.1 and 3.0, since anything compiled against one of those won’t be able to magically switch to another. But as long as OpenSSL takes care of their ABI, it’s fine.

Now extend this to libraries in general and… well… you can’t. Unless the library devs are deliberately maintaining compatibility, which is my special category of “well behaved”, and all the consumers are equally well behaved (e.g. libpq2 probably doesn’t have runtime linking options for OpenSSL, or it would be able to pick up the same copy that _ssl uses), at which point sure. When you’re building a library deliberately for this, it can be done.

Most libraries aren’t. There’s a reason we fork all the sources for the libraries CPython depends on and use our own (sometimes patched) builds[1] - because everyone being “well behaved” just isn’t a reality. And putting copies of their libraries into a wheel doesn’t make them any better behaved, unless the original devs are doing it voluntarily and know what they’re signing up for.[2]

About the only way we could make this work is to define a standard environment for packages to build against. If having cp312 in your wheel tag implied that you’d always have OpenSSL 1.1.1<letter>, then people wouldn’t have to bring their own copy. But we’ve already decided not to make those kinds of guarantees, and so it’s left to lower-level tools than CPython/pip to define the environment, and for higher level libraries to decide whether to inform users what dependencies they expect (e.g. things listed in the manylinux spec), or to just bring their own copy and not worry about the environment at all (e.g. things not listed in the manylinux spec).


  1. Including OpenSSL ↩︎

  2. Which I’d love, don’t get me wrong. It’s just unrealistic. ↩︎


Clearly you’ve thought a lot more about the edge cases than I have, but this is all really cool. :slight_smile: Given its age and the fact that it feels familiar, I feel like I must have read it at some point when I was trying to solve this problem.

Certainly what I’ve done is effectively a naive version of pynativelib – and it works great for my constrained use case (all libraries are built by effectively the same process using the same compilers, and everything is released at around the same time). Since what I’ve done works, it does feel like a proof of concept that your fuller proposal should also work if one took the time to do it – and honestly, if it weren’t for macOS most of this would be way easier.

This is how golang encourages modules to work, and while it’s annoying it feels like it’s probably what one should encourage packages to do.

I think what njs was proposing in pynativelib would actually provide a way to solve this problem. In particular, if the build system were smart enough to:

  • provide a way to link to already mangled libraries (maybe it does the inverse mangle, maybe it just works)
  • mangle the native libraries after build in a unique way that encodes compatibility information in them

Then it’s totally workable – and if nothing else, it’s not much worse than the compatibility issues you already run into with existing python packages.

Well, there are the macOS DYLD_ env variables (DYLD_LIBRARY_PATH et al) that allow you to change library search paths at run time. Not suitable for all cases but they can be useful. (man 1 dyld)