Installation of shared objects into a separate folder

I’m trying to come up with a way to install (deploy) the shared objects that my package needs into a separate folder under venv/lib/python3.?/site-packages/. I would compile those shared objects from the C++ code I have, and the package I’m working on uses Cython, which would then link against those shared objects.

For example, if the package is called mypkg, I would like to have the supporting shared libraries in mypkg.libs:
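Something like the following layout (a sketch of what I’m after; the library and file names are just placeholders):

```text
venv/lib/python3.9/site-packages/
├── mypkg/
│   ├── __init__.py
│   └── pkg.cpython-39-x86_64-linux-gnu.so
└── mypkg.libs/
    └── libsupport.so
```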



I can see this is the approach numpy uses to deploy some of its supporting libraries. Their build/deploy code looks quite complex to me and I’m not seeing how they do it.

Has anyone done anything similar?

This is the kind of thing that should go into a wheel’s .data directory (you’d put it into mypkg-{version}.data/platlib/mypkg.libs). I don’t know how to declare it in the build configuration though, maybe someone else does.
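Roughly, inside the wheel that would look like this (a sketch; the wheel tag and version are illustrative, and platlib contents get merged into site-packages at install time):

```text
mypkg-1.0-cp39-cp39-linux_x86_64.whl
├── mypkg/
│   └── ...
└── mypkg-1.0.data/
    └── platlib/
        └── mypkg.libs/
            └── ...
```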

If I understand OP correctly, what they want is something similar to auditwheel (for GNU/Linux) or delocate (for macOS). AFAICT, though, there’s yet to be a generic solution for Windows.

While it’s indeed not supported by setuptools, alternative build backends wrapping around e.g. CMake or Meson may handle this well, but I haven’t been keeping track of those lately.

There is one:

Never used it myself though so YMMV


Thanks for the input guys!
I’ve been experimenting with this content:

    module1 = Extension('mypkg.pkg', ...)
    module2 = Extension('mypkg_data._mylib', ...)

    setup(
        ...,
        ext_modules=[module1, module2],
    )

After installing with python setup.py install --old-and-unmanageable I somehow get the desired results in the venv/lib/python3.9/site-packages folder:
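Roughly this layout (the file names below are illustrative; the exact extension suffix depends on the platform):

```text
venv/lib/python3.9/site-packages/
├── mypkg/
│   ├── __init__.py
│   └── pkg.cpython-39-x86_64-linux-gnu.so
└── mypkg_data/
    └── _mylib.cpython-39-x86_64-linux-gnu.so
```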


Note that mypkg/ would contain more pure Python sources, while mypkg_data/ would not.
Since the Extension name (first argument) treats the “.” as a package/module separator, I have not yet found a way to get the mypkg.libs folder name I wanted.

If I create the folder manually after the build and move the shared library into it, it gets installed with no complaints. That’s a crude manual hack, I know. I would need to extend / override the installation step to make it happen, I guess.

So I misunderstood what you meant by shared object. FYI, both mypkg.pkg and mypkg_data._mylib are extension modules. I am curious if there is anything preventing it from working, though.

Back to your original goal of

link with those shared objects

please see the extra_* parameters in Extension (API, guide).
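For instance, something along these lines (a minimal sketch; the source file, library name, and paths are hypothetical):

```python
from setuptools import Extension

# Hypothetical extension module that links against a prebuilt shared library
ext = Extension(
    'mypkg.pkg',
    sources=['src/pkg.pyx'],
    include_dirs=['include'],           # where the C++ headers live
    library_dirs=['build/mypkg.libs'],  # where libmylib.so was placed
    libraries=['mylib'],                # adds -lmylib at link time
    extra_link_args=['-Wl,-rpath,$ORIGIN/../mypkg.libs'],  # ELF runtime search path
)
```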

I am curious if there is anything preventing it from working, though.

No complaints, but the resulting shared object is placed into a different folder than the one I wanted.

please see the extra_* parameters in Extension

Right, this will be required as well when building this package!

I also need the same shared library to be available to “mypkg2”, which would depend on it while also doing import mypkg.

I managed to get my shared library deployed into a folder of my choice by defining an extra Extension and overriding build_ext to my liking. It took a while to get all three major OSes to work, but it is possible. As a side note, defining an extra Extension to compile a native library might be overkill and also makes the setup process think it needs to deploy it as a package (on python setup.py install). I’m currently working on having just bare-bones CCompiler calls produce the native library.
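The override boils down to something like this (a sketch of the approach; the names and paths are illustrative and error handling is omitted):

```python
import os
import shutil

from setuptools.command.build_ext import build_ext


class build_ext_with_libs(build_ext):
    """After the regular build, relocate the helper library into mypkg.libs."""

    def run(self):
        super().run()
        dest = os.path.join(self.build_lib, 'mypkg.libs')
        os.makedirs(dest, exist_ok=True)
        for ext in self.extensions:
            if ext.name == 'mypkg_data._mylib':
                # get_ext_fullpath() returns the platform-specific build output
                shutil.move(self.get_ext_fullpath(ext.name), dest)
```

It is then hooked in with cmdclass={'build_ext': build_ext_with_libs} in the setup() call.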

Nevertheless, I’m hitting another showstopper on OSX that has to do with the location in which python setup.py install places the shared library. This becomes particularly important when mypkg2 needs to get a hold of the native library, as OSX is quite strict about where it looks for libraries it needs to dynamically load. For example, if I install with python setup.py install, the resulting folders end up under an *.egg folder and this breaks my mypkg2 import. If python setup.py install --old-and-unmanageable is run, the *.egg folder is not created, and the mypkg2 import works. Interestingly, this is not an issue on Linux and Windows.

I’m trying to come up with a better location for these native libraries (the ones in *.data folders) I want to deploy. In my experience this is not an issue when a python package is the only user of the native library it provides, but as soon as other packages need to use that native library, they may run into trouble locating it (rpath, DLL hell) at package import time.

Would it be customary to use some other folder under ./venv/lib or ./venv/lib/pythonX.Y to hold native libraries that python packages can use?

Do not use setup.py install. Use pip install . instead.

Ultimately, if you have a complex environment with multiple packages sharing binaries, you’ll need to configure it more manually than wheels can really support. What you need here is a way to share these binaries between the builds of the packages, so that they all have similar expectations. It’s close to impossible to assemble an environment like this from a distributed set of pre-built packages (and if you manage, it’s going to be very difficult to update any part of it).

If your aim is to distribute a wheel, the best you can do is put the copies of your binaries directly alongside the extension modules that use them. This should be supported on all platforms for your package in isolation.

When you know that other packages are likely to exist in user environments that will conflict with yours, the best you can do is reach out to those packages and get them to agree to distribute the binaries in their own package, so that you can have them in a known location and your packages will be able to negotiate on versions. Without this, you’ll get broken/random behaviour over time, as each platform is going to handle the conflicts differently.

FWIW, this problem is the one that led to conda being invented, so it’s not completely unsolved. It just takes a lot more agreement between all the players (or alternatively, a third-party to rebuild everything with some agreement) in order to get it to work properly.


For context, this is my first stroll down the road of packaging for python. It does not feel like a stroll, though, more like stumbling over different concepts and tools, with changes to those over time, and support for different python versions on different OSes… It took me off guard, to be honest.

If your aim is to distribute a wheel, the best you can do is put the copies of your binaries directly alongside the extension modules that use them. This should be supported on all platforms for your package in isolation.

Yes, this is the general idea: to go with something that can be installed with known tools like pip. I do have the “single package distribution” case covered for all three OSes. As soon as I add the second package to the mix, which needs access to the first package’s support binaries, their location becomes a major issue for me to tackle. I did miss the part about using pip install . instead of python setup.py install, as @uranusjr pointed out. Hopefully matters get better defined in terms of locations when going this way.

Placing the supporting binaries next to the extension module seems the most straightforward path. I guess I wanted to make a distinction between the support binaries and the extension, just to start complicating my life. I have also made that work, but I fear the solution is on weak foundations because I’ve seen the location of the package change (.egg vs. no .egg folder). Again, I need to look into pip install ..

When you know that other packages are likely to exist in user environments that will conflict with yours…

The binding I’m trying to create and distribute can be considered non-existent at this point in time, but I see where problems may arise from assuming that will not change in the future. Nevertheless, there is no ‘official’ native packaging of the library/code I’m providing a binding for, and the upstream authors are not thinking of providing one (it is a couple of C++ files that are usually dropped into the end project’s folder and compiled into the final binary). Of course, it is possible to have those compiled as a shared library and used as such, but the use cases are few and far between; one of them seems to be a python binding, though. That being said, the likelihood of the end user already having such a library installed is very small, IMHO. At the same time I do not want to ask the user to install this shared library on her own. I also feel uneasy about placing the library in some non-site-packages venv folder where it may conflict with potential future bindings.

I would also like to understand a bit more how to approach the distribution of the headers (and possibly static libraries) from which the support binaries were built. Should a developer who needs those as dependencies in her project find them in the wheel? Would they normally be distributed with the binary wheel? Or is there a concept of dev or src packages that I should look into?

Currently I have two packages I’m working on, with one depending on another, and the support sources and binaries on hand. At the moment the dependent package build needs to have the dependee package in site-packages. I think that is OK, because I do not want to require other developers to deal with the dependee package source compilation and all to get their package built.


To make matters a little more complicated, especially on Windows: carrying shared .dll files / libraries / data files in a wheel is not always enough to get things working. Some parts may still require installation directly into the system (Program Files, common data directories, the registry, and so on). The people who make these kinds of installers rarely have python in mind, or the installer has worked for many years and is no longer actively maintained. Python, however, is a very general, easy-to-use language, so the python world more likely needs to work with this stuff rather than the other way around. With wheels alone, there will be a chunk of cases that make everyone using python packages struggle. The fact that setup.py install used to be called by pip install for egg-based packages gave a way to make this matter less of a showstopper.

Yes, what you describe is pretty standard. It’s why CPython (on Windows/macOS) includes OpenSSL shared libraries, statically linked SQLite bindings, and other third-party sources embedded in various places.

Ultimately, without an officially distributed package to depend on, you’ll have to make the choice yourself.

  1. statically link (so your library can use it, but others will be isolated from it)
  2. “public” dynamically link (so anyone can use it, but will have to rely on your library to get it)
  3. “private” dynamically link (same, but tell people not to use it, so it’ll only happen by accident)
  4. release your own dedicated package

Since Python build tools currently have no way to infer include/lib paths from installed packages, there isn’t really much to be gained by distributing dev files as a wheel (again, this is something that conda does support, because it’s a very useful thing to have). There are some tricks to inject into setuptools, but since that isn’t the standard anymore you’ll still need a way for other build tools to find the paths. Ultimately, it’ll probably be easiest for you to just distribute those as a .zip file somewhere (like a GitHub release page or similar).
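One such trick is the convention numpy established: the installed package exposes a helper that downstream setup scripts call to locate the bundled headers (a hypothetical mypkg version, shipping its headers in an include/ subfolder):

```python
import os


def get_include():
    """Return the directory containing mypkg's bundled C++ headers,
    for use in a downstream Extension's include_dirs."""
    return os.path.join(os.path.dirname(os.path.abspath(__file__)), 'include')
```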

Of the four options above, I’d recommend either 1 or 2. Choose 2 if other libraries might need to use the C APIs directly, or 1 if it can all go through your Python APIs. Otherwise 4 is good if it’s your package or if you’re prepared to support it. (I’d recommend against number three, because it will probably end up as two anyway.)

One of my projects uses 2 and 3.
The part using 3 is designed by me and installs into the python package folder, but it needs 2, which has to be installed separately. I had implemented this together with an automatic version scheme based on git. But now all of that is disabled, so my only choice may be that part of it goes in a wheel (3) and the rest has to go into conda, maybe. But this feels like reinventing the wheel while I have so much other new work to get done; it would kill me.
Also, convincing everyone who uses python to switch from pip install to conda install is another nightmare (I cannot change people’s mindset without a superpower).