Native dependencies in other wheels -- how I do it, but maybe we can standardize something?

Okay, this wasn’t clear at all in your original post :slight_smile:

This is a very challenging problem, but basically comes down to not being able to guarantee that the ABI (or even the API) matches. A lot of these details are negotiated at compile time by the C compiler, so you end up having to lock the version of the dependency at compile time and then ensure that you get exactly the same version later on. With two (or more) independently released packages, this is near impossible.

A properly defined C library can do it, but again, most are not designed to handle this.

The best way to do it would be to write a thin (or thick!) Python wrapper around the C library, distribute that with the C library included, and then encourage everyone who needs that C library to use your Python API instead. Many won’t like it, because they lose native performance, but that’s ultimately the tradeoff.

Fair enough. :slight_smile:

So, you’re right. But also, this slightly misses the point I think.

Certainly ABI/API issues are problematic to deal with. This is especially challenging for C++ libraries – I recently ran into an issue where my native compiler was a different version than my cross-compiler, and there was a mismatch in the layout of std::span.

But arguably, we can’t really encounter these API/ABI issues right now, because this sort of linking isn’t even possible to do generically yet.

I think if we were able to do it more easily, there would be more motivation to figure out ways to solve these other issues too.

That only works well if that library doesn’t have dependencies of its own. Then you have to bundle those too, but if you have something else that also depends on them…

I never said it was a good way, only that it’s the best way :wink:

I try to take this same position, but packaging has mostly burnt it out of me :frowning: I don’t want to be discouraging, but ultimately it seems like these discussions go nowhere without a popular implementation. Ignore the people who say it can’t be done, do it anyway, and when it seems inevitable then your idea might get some traction.

Sorry to hear that.

Well the good news (for me anyways) is that I do have a solution for all of this, and it works for me, and I’m actively using it. Maybe someone else will care about it too.

Or… would it be a viable idea to write no wrapper at all, and distribute a wheel containing only the C library as a data file? You need to jump through some hoops to name (when packaging) and locate (when depending on) the libs, perhaps taking some inspiration or implementation from auditwheel etc., but I think this is technically doable?
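For the “locate (when depending on)” half, here’s a minimal sketch of what a consumer could do, assuming a hypothetical `libfoo_wheel` package that ships the shared library purely as package data (the package name, layout, and library name are all made up):

```python
# Sketch only: find and load a shared library shipped as data in another wheel.
from importlib.resources import files
import ctypes

libdir = files("libfoo_wheel") / "lib"        # hypothetical layout inside the wheel
lib = ctypes.CDLL(str(libdir / "libfoo.so"))  # platform-specific filename in practice
```

The “name (when packaging)” half is essentially the renaming/mangling discussed in the replies below.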

I wrote up some notes on how to do this years ago, but stalled out due to lack of time + folks not seeming too interested: wheel-builders/pynativelib-proposal.rst at pynativelib-proposal · njsmith/wheel-builders · GitHub
It’s exciting to see interest again!

The challenging part is making wheels that support arbitrary Python environment layouts. Your macOS solution works for the case where all the packages are installed into the same site-packages directory, and that should work… a lot of the time? But normally if package A depends on package B then all you need is for A and B to both be findable in sys.path – requiring that they both be installed into the same directory is an extra requirement that normal packages don’t need, and can break down when dealing with things like in-place installs, or environments that are assembled on the fly by just tweaking environment variables instead of copying files around (like I’ve been experimenting with).

However, this is solvable. It’s just frustratingly complicated.

For Windows and Linux: as you noted, if a shared library declares a dependency on some-lib, and a shared library named some-lib has been loaded into the process, then the dynamic loader will automatically use that some-lib to resolve the dependency. You do want to make sure that your libraries all have unique names, to avoid collisions with potentially-incompatible binaries from other sources – so e.g. you might rename your copy of libntcore to pynativelib-libntcore, in case the user has another copy of libntcore floating around under its regular name. Fortunately, patchelf can do this renaming on Linux, and machomachomangler can do it on Windows.

This part is relatively routine – the library vendoring tools auditwheel for Linux and delvewheel for Windows do the same renaming trick for similar reasons. That’s why I wrote the Windows renaming code in machomachomangler in the first place :slight_smile:
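To make the renaming concrete, here’s a hedged sketch of the Linux half, assuming patchelf is on PATH; the library and extension file names are made up:

```python
# Sketch only: give our vendored copy a unique SONAME and point a consumer at it.
import subprocess

# Rename the vendored library so it cannot collide with a system libntcore
# loaded under its regular name.
subprocess.run(
    ["patchelf", "--set-soname", "libpynativelib-ntcore.so", "libntcore.so"],
    check=True,
)

# Rewrite the consuming extension module's DT_NEEDED entry to match.
subprocess.run(
    ["patchelf", "--replace-needed", "libntcore.so", "libpynativelib-ntcore.so",
     "_my_extension.so"],
    check=True,
)
```

On Windows the equivalent edits to the PE import table would go through machomachomangler rather than patchelf.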

For macOS: as you noted, it is absolutely impossible to convince the macOS dynamic loader to look up a library by anything other than an absolute or relative path. Super annoying. However! There is a trick. Explaining it requires a brief digression into obscure corners of Mach-O.

Say you’re building a library that uses cv::namedWindow from opencv. What normally happens is:

  1. Your compiler “mangles” (this is a technical term) the C++ name into a simple string, probably something ugly like _ZN2cv10namedWindowEv.
  2. It looks around at the libraries you have to link to, and finds that this symbol is exported by libopencv.dylib.
  3. It makes a note in your binary that when it wants to call cv::namedWindow, it should do that by first finding libopencv.dylib, and then looking inside it for _ZN2cv10namedWindowEv.

macOS calls this procedure the “two-level namespace”, because your binary stores two different pieces of information: the library to look in (as a filesystem path!), and the symbol to look for inside that library.

But! macOS also supports an alternative lookup procedure, the “single-level/flat namespace”. When using this, your compiler just writes down that it wants a symbol named _ZN2cv10namedWindowEv, without any information about where it should come from. And then when your binary gets loaded, the loader looks around through all the libraries that are loaded into the process for a symbol with that name. Now that pesky filesystem lookup is gone!

So all we need to do is:

  • make sure that our wheel-wrapped libopencv.dylib is already loaded before loading our library
  • …somehow make sure that every single symbol that our wheel-wrapped libopencv.dylib exports has a globally unique name, so it can’t accidentally collide with some other random binary, and that our library looks up symbols by those globally unique names.

Unfortunately, our libraries probably don’t have globally unique symbol names; that’s the whole reason why macOS added the two-level namespace stuff.

But, we can fix this! All we need to do is go through the symbol tables stored inside libopencv.dylib and inside our binary, and rewrite all the symbols to make them unique, so e.g. now libopencv.dylib exports _our_special_uniquified__ZN2cv10namedWindowEv, and that’s what our binary calls, and we’re good.

Of course, rewriting symbol tables is pretty complicated, especially since macOS stores symbol tables in a weird compressed format: instead of a table, it’s actually a program in a special bytecode language that, when executed, outputs the symbol table, so you need to evaluate that program and then generate a new one. It’s kinda wild tbh.

Luckily, machomachomangler has code to do exactly that. Well, it’s not really luck; this is why I wrote it :slight_smile: It’s been like 5 years since I looked at this last and I don’t think anyone has used that code in production, so it might need some tweaks to bring it up to speed, but that shouldn’t be too hard as long as someone is interested in making it happen.
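To make the “already loaded before loading our library” half concrete, here’s a hedged sketch of what the Python side of the wrapping wheel might look like; the `opencv_libs` package and file names are assumptions for illustration, not part of any existing project:

```python
# Sketch only: the wheel that ships the mangled libopencv.dylib exposes a
# loader that consumers call before importing their own extension module.
import ctypes
import os

def load() -> ctypes.CDLL:
    dylib = os.path.join(os.path.dirname(__file__), "libopencv.dylib")
    # RTLD_GLOBAL makes the (uniquified) exports visible to the flat-namespace
    # lookup performed when a dependent binary is loaded later.
    return ctypes.CDLL(dylib, mode=ctypes.RTLD_GLOBAL)

# In a consuming package, before importing its extension:
#   import opencv_libs; opencv_libs.load(); from . import _my_extension
```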

Nice – if you read the pynativelib proposal I linked at the top, then I think we converged on a pretty similar design. The big thing I’d suggest adding is metadata for “if you build your wheel against this version of this package, then here are the install-requirements that you should add to your final wheel’s metadata, to make sure that an appropriate version of this package is available”. So e.g. building against ntcore 1.2.7 might suggest a runtime requirement of ntcore == 1.2.*, >= 1.2.7, or ntcore == 1.*, >=1.2.7, or even ntcore == 1.2.7 – whichever one matches ntcore’s ABI compatibility guarantees, which the ntcore distributors will understand better than their users.

This is doable – we already have package names and version constraints and all that to solve these exact problems for Python libraries :slight_smile: We just need to figure out how to encode the C compatibility constraints using those tools.
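As a toy illustration of that encoding, a build backend could map the version it compiled against to the requirement it writes into the final wheel’s metadata. The “same minor series, at least the build version” policy and the `ntcore` name below are just placeholders:

```python
# Hypothetical sketch: derive a runtime pin from the build-time version,
# under an assumed ABI policy (same minor series, at least the built version).
from packaging.version import Version

def runtime_requirement(built_against: str) -> str:
    v = Version(built_against)
    return f"ntcore == {v.major}.{v.minor}.*, >= {built_against}"

print(runtime_requirement("1.2.7"))  # -> "ntcore == 1.2.*, >= 1.2.7"
```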

(It’s even possible to support e.g. two incompatible versions of opencv. Just name the python packages opencv-v1 and opencv-v2, and make sure the binary mangling uses different names, so wheels depending on opencv-v1 get the v1 symbols and wheels depending on opencv-v2 get the v2 symbols.)

This is a very nice writeup of the motivation and magic that is behind machomachomangler. Quickly scanning the PyPI page you linked to, and the pynativelib-proposal, I don’t see as clear an explanation of the solution used for macOS. Did I miss it? I do see " I promise it will all make sense once I have a chance to write it up properly…". Maybe that comment could be replaced with a link to this description?

Provided they never interact with each other – at which point they might as well have brought their own copy of the library (or statically linked it) and completely and trivially avoided conflicts with other packages.

If you have a library that opens an image, then another one that performs operations on it, those almost certainly are going to need the same version of the library (unless you’ve got what I referred to above as a well-designed library).

And I’d argue that what we currently have for Python libraries doesn’t solve these exact problems, or we wouldn’t be having the other discussion about how to solve these problems :slight_smile: The way to “solve” it with what we currently have is for libraries to pin the version of the dependency they need, thereby almost certainly causing conflicts with any other library following the same advice, and so making the whole thing unusable.

I agree there are many circumstances where we can hold our noses and make something good enough to usually work, but we also know that just pushes the edge cases further out and makes them harder to discover and resolve.

I’ve toyed with the idea of adding a libraries directory to Python, but at the time I was not sure how to implement it viably. The idea would be that the interpreter includes that libraries directory in its library search path. If we can figure out how to do that portably, I think it’d be workable.

Several platforms, Linux included, load dynamic libraries into a shared space for the whole process, so if two different packages try to import two different versions of the same library, you’ll likely have issues. I think the solution to this is to standardize a way to distribute these libraries, so that we can prevent two versions of the same library from being installed in the first place.

Sure, and that’s what we do now. But vendoring everything like this has two major downsides:

  • Large foundational libraries end up getting duplicated lots of times, e.g. every numerics-related project ends up shipping its own copy of BLAS/LAPACK because they all need basic matrix operations[1]. This wastes space (Intel’s version of these libraries is >100 MiB for each copy!) and can be inefficient at runtime (e.g. if they all create separate threadpools)

  • Keeping all those copies up to date is a hassle. For example, OpenSSL gets shipped in lots of different projects’ wheels right now (e.g. psycopg2 wheels have their own copy, because it’s a wrapper for libpq2, and libpq2 uses OpenSSL). So every time OpenSSL fixes a CVE, all these packages need to re-roll new binary wheels, and everyone has to upgrade. In practice this doesn’t happen, so people just keep using a mixture of old versions of OpenSSL. It’d be way easier if there was just one project distributing OpenSSL and pip install --upgrade openssl-but-wrapped-in-a-wheel would fix them all at once.

So those are problems that sharing libraries between wheels would solve.

It’s true it doesn’t magically let you link together libraries with incompatible ABIs, but it doesn’t have to in order to be useful :slight_smile:

Unfortunately, macOS just doesn’t support this. There is no concept of “library search path” at all. Very frustrating.

Fortunately, all the major platforms have ways to namespace/isolate symbols, so this is avoidable. It just requires becoming way too familiar with arcane details of dynamic loaders…


  1. Well, technically, for this specific case, scipy has some clever machinery to export a table of function pointers for BLAS/LAPACK inside a PyCapsule, so other C code can use Python to find the function pointers but then switch to using them through C. And Cython has some special support to make this more ergonomic. But doing this for every library in the world is a non-starter. ↩︎

I thought you could do this with otool and setting an rpath, as discussed here: command line - Print rpath of an executable on macOS - Stack Overflow

Oh, right, I forgot about @rpath :slight_smile: Yeah, an executable can have a list of directories on the rpath, and you can have a library that’s loaded from “any directory on the rpath”. But the list of directories is baked into the executable binary itself – there’s no way to change it at runtime, or collect all the library directories from sys.path, or anything like that.
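For completeness, a hedged sketch of how a wheel build/repair step could inspect and add rpath entries ahead of time (the file names and relative path are made up):

```python
# Sketch only: rpath entries are baked into the binary, so they get edited when
# the wheel is built or repaired, not while the interpreter is running.
import subprocess

# List the load commands; LC_RPATH entries show up in this output.
subprocess.run(["otool", "-l", "_my_extension.so"], check=True)

# Add a loader-relative rpath so a sibling wheel's lib directory can be searched.
subprocess.run(
    ["install_name_tool", "-add_rpath", "@loader_path/../mylib_wheel/lib",
     "_my_extension.so"],
    check=True,
)
```

This still only helps when the relative layout is known at build time, which is exactly the limitation being discussed.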

I agree, but you’re doing a lot of heavy lifting with this sentence.

OpenSSL is one of the few libraries that could be updated this way, because they’re very careful about their API. (I’d never want to ship CPython this way, because we’re not, by comparison :wink: ) We’d still end up with parallel installs of 1.0, 1.1, probably 1.1.1 and 3.0, since anything compiled against one of those won’t be able to magically switch to another. But as long as OpenSSL takes care of their ABI, it’s fine.

Now extend this to libraries in general and… well… you can’t. Unless the library devs are deliberately maintaining compatibility, which is my special category of “well behaved”, and all the consumers are equally well behaved (e.g. libpq2 probably doesn’t have runtime linking options for OpenSSL, or it would be able to pick up the same copy that _ssl uses), at which point sure. When you’re building a library deliberately for this, it can be done.

Most libraries aren’t. There’s a reason we fork all the sources for the libraries CPython depends on and use our own (sometimes patched) builds[1] - because everyone being “well behaved” just isn’t a reality. And putting copies of their libraries into a wheel doesn’t make them any better behaved, unless the original devs are doing it voluntarily and know what they’re signing up for.[2]

About the only way we could make this work is to define a standard environment for packages to build against. If having cp312 in your wheel tag implied that you’d always have OpenSSL 1.1.1<letter>, then people wouldn’t have to bring their own copy. But we’ve already decided not to make those kinds of guarantees, and so it’s left to lower-level tools than CPython/pip to define the environment, and for higher level libraries to decide whether to inform users what dependencies they expect (e.g. things listed in the manylinux spec), or to just bring their own copy and not worry about the environment at all (e.g. things not listed in the manylinux spec).


  1. Including OpenSSL ↩︎

  2. Which I’d love, don’t get me wrong. It’s just unrealistic. ↩︎

Clearly you’ve thought a lot more about the edge cases than I have, but this is all really cool. :slight_smile: Given its age and the fact that it feels familiar, I feel like I must have read it at some point when I was trying to solve this problem.

Certainly what I’ve done is effectively a naive version of pynativelib – and it works great for my constrained use case (all libraries are built by effectively the same process, using the same compilers, and everything is released at around the same time). Since what I’ve done works, it does feel like a proof of concept that your fuller proposal should also work if one took the time to do it – and honestly, if it weren’t for OSX most of this would be way easier.

This is how golang encourages modules to work, and while it’s annoying it feels like it’s probably what one should encourage packages to do.

I think what njs was proposing in pynativelib would actually provide a way to solve this problem. In particular, if the build system were smart enough to:

  • provide a way to link to already mangled libraries (maybe it does the inverse mangle, maybe it just works)
  • mangle the native libraries after build in a unique way that encodes compatibility information in them

Then it’s totally workable – and if nothing else, it’s not much worse than the compatibility issues you already run into with existing Python packages.

Well, there are the macOS DYLD_ env variables (DYLD_LIBRARY_PATH et al) that allow you to change library search paths at run time. Not suitable for all cases but they can be useful. (man 1 dyld)
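A small hedged example of that pattern: DYLD_* variables are read at process launch (and may be stripped by System Integrity Protection for protected binaries), so they are typically set when spawning a new process rather than from inside a running interpreter. The path and module name here are hypothetical:

```python
# Sketch only: launch a fresh interpreter with DYLD_LIBRARY_PATH pointing at
# bundled libraries (hypothetical path and module name).
import os
import subprocess
import sys

env = dict(os.environ)
env["DYLD_LIBRARY_PATH"] = "/path/to/bundled/libs"
subprocess.run([sys.executable, "-c", "import my_native_module"], env=env, check=True)
```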

Thanks for working on this @virtuald. It’d be quite useful to document the recommended design pattern here better and make it easier to implement. I know NumPy and SciPy plan to put OpenBLAS in a separate wheel and rely on that (only for their use, not for anyone else’s), and the PyArrow folks are similarly planning to split up their one large wheel into multiple ones (e.g., put libarrow in a separate wheel). It’s still nontrivial to get this right across OSes, as discussed in the posts above.

I do have to caution that this is only a reasonable idea if you control all the separate pieces. That is the case for RobotPy it sounds like, and also for NumPy/SciPy and PyArrow. The OpenSSL story is a bit different. The original proposal here and pynativelib don’t really have a good upgrade path - given how PyPI and wheels work, it is not a healthy idea to go all-in on separate wheels with non-Python C/C++/etc. libraries. For this to work, you need a way to roll out upgrades when you need to make backwards-incompatible changes, switch compilers, etc. Linux distros, Conda, Homebrew & co all have that, and that kind of setup is the right one for broader use of non-vendored libraries. I wrote about this in a fair amount of detail in pypackaging-native, see for example PyPI’s author-led social model and its limitations and https://pypackaging-native.github.io/meta-topics/no_build_farm/.

I think you’re right, but I’m glad to see people exploring solutions like this for cases where it can work. It’s very clear that there are a lot of potential problems that might arise, and the failure modes (e.g., segfaults) are not great. But if we accept that it’s not a perfect solution, IMO a “good enough” answer will be useful for a lot of people.

Wheels themselves were designed as a “good enough” solution, and we got a long way before their limitations started to be a problem. Of course you could argue that we’re now suffering from not having looked closely enough at the risks - but I feel that this is a case where “Now is better than never” applies (and IMO we’ve spent long enough trying to find ways forward that we’ve satisfied the demands of “Although never is often better than right now” :slightly_smiling_face:)

I may be missing something, and maybe this isn’t directly an answer for this situation, but it seems like this is precisely the case addressed by conda? In conda binary dependencies like that become separate packages and then everything calls them out explicitly.

I believe I addressed this in the OP under “Sidebar: Why not conda?”

Note: I wrote this some time ago, but forgot to post it – turns out that this is now a good time :slight_smile:

I just want to comment on this:

  1. While conda is widely used by data scientists, there is nothing about it that is specific to that use case. If there are not conda packages available for your types of projects, that’s because no one has bothered to build them – not because conda is somehow not suited to other fields.

And with the advent of conda-forge – it’s actually pretty easy to make packages available for the community to use.

Like any new system, conda had some growing pains – it’s a lot better now. But I would note that those of us that use conda do so because:

“issues they had with using pip, and the few times I had to interact with pip I didn’t have a particularly good experience”
:slight_smile:

  • My experience, and many others’, is that conda is massively easier when you have to deal with non-pure-Python dependencies [*] – which is exactly what this thread is about.

Maybe conda is poorly designed or implemented, but I think that the challenges of conda are because it is trying to solve a very hard problem. If pip+wheel, etc. is expanded to address those same problems, it will have the same difficulties.[**]

If the community decides it wants pip+wheel to solve these issues – great – but they are not easy problems, and you will find that you are reimplementing much of conda. (At the very least, learn from it – most of the issues I see being discussed in this thread have been solved in conda, maybe badly, but at least look.)

Don’t forget that conda was developed precisely because the PyPA (well, I don’t think it existed then, but the Python packaging community anyway) specifically said that they were not interested in solving those problems.

It’s come a long way since then, but the challenges are still there, as you can see.

[*] Note: the non-pure-Python dependency thing is a big deal for users, but an even bigger deal for package developers – for the most part, the pip+wheel solution is to “vendor” all the libs a package needs.

[**] Note 2: In my experience, when conda does not work well for someone, it is caused by one of three reasons:

  1. The packages they need are not built for conda:
    • This is a much smaller deal than it used to be, because of conda-forge, and because adding a few packages with pip to a conda environment works pretty well. Unless it’s not easily pip-installed anyway, but pip doesn’t solve that.

  1b) They don’t know about conda-forge :slight_smile:

  2. They are a bit confused about what conda is and how it interacts with pip, virtualenv, etc. – so they try to build a virtualenv within a conda env, or use pip to install / upgrade packages that they should install with conda (even pip itself).
    • This is a tough one, but it’s about education – one of the main sources of problems is that tutorials, etc. often start with “make a venv…” without any explanation of why, or whether you need to, or… I have literally had students that thought making a venv was something specifically Flask needed to run…