Great detective work! That’s definitely a big step towards figuring this out. But, the biggest mystery is still unresolved: each python extension loads in its own ELF namespace, so even if two extensions have conflicting symbols, it shouldn’t matter, because they can’t see each other’s symbols. But somehow in the tensorflow/pyarrow setup, something is breaking this isolation. How is that happening?
There’s a possible problem with your reproducer script: your main program that calls dlopen is written in C++, while in the real CPython case, the main program that calls dlopen is written in C. The reason this could be a problem is that when the main program is written in C++, it basically LD_PRELOADs all the system’s libstdc++ symbols on top of every module you dlopen. (Yes, this is super weird. I didn’t design it…) When the main program is written in C, this doesn’t happen. So probably the next step is to rewrite the reproducer in C and see if the same problem occurs or not.
Right, procuring all the libraries that you want to vendor into your wheel is definitely a challenge – CC @sumanah for an example of a challenge facing scientific users. One of the ideas that’s come up in the past is to use conda-forge as a source of vendorable, prebuilt binaries, e.g. by setting up a conda env and then building the wheel and running auditwheel there. I’m not sure exactly which manylinux version conda is targeting right now – it might be one of the new ones we need perennial manylinux to define – so someone would need to figure that out too. Otherwise though, this is getting into a different topic entirely from manylinux PEP definitions, so we should probably continue the discussion in a different thread.
Right, but those are coming from glibc, and everyone is supposed to be sharing a single copy of them. Our working hypothesis is that pyarrow has a vendored copy of std::once that was automatically inserted by the manylinux compilers (as part of their hacks to make it possible to use new C++ features on systems with an old version of libstdc++), and that tensorflow is using the version of std::once from the system’s copy of libstdc++, and somehow these two copies of std::once are interfering with each other. None of that applies to pthread_once.
I didn’t know about dlmopen, but I just discovered this interesting snippet in the dlopen / dlmopen man page:
The dlmopen() function also can be used to provide better isolation than the RTLD_LOCAL flag. In particular, shared objects loaded with RTLD_LOCAL may be promoted to RTLD_GLOBAL if they are dependencies of another shared object loaded with RTLD_GLOBAL.
Any update on this? It’s been a bit over a month since you made that comment, and manylinux2014 is moving along. I don’t want to get into a debate over when the rollout will be considered to “have happened” but hopefully you at least have a better idea of when you expect a proposal to be ready (in ballpark terms if nothing else).
We tried a couple of avenues for finding funding quickly, but none of them panned out. We are going to have to write a grant proposal to someone. Due to other commitments and travel, the earliest we can start on that is going to be November, and I can’t say when the funding will actually materialize.
Having thought about this some more in the interim, though, I now agree with @njs that, if the ultimate solution for mysterious C++-related crashes requires changes to the manylinux specifications, those changes will be orthogonal to the changes from manylinux20xx to perennial, and therefore this outstanding bug should not be a blocker for perennial.
I still think the perennial PEP should be deferred until after the manylinux2014 transition is complete and we have seen all of the fallout from it. “Complete” means something like “more than 66% of all wheels downloaded from production PyPI daily, to Linux, containing compiled code, are manylinux2014 wheels.” This is entirely because of process-related unknown unknowns; I don’t think we can be confident of being able to tell whether perennial is completely specified until we see the manylinux1 → manylinux2014 transition play out in production.
The more important driver for a transition, AIUI, is that the build environments for the older tags are based on versions of CentOS that are either past (ml1) or nearing (ml2010) their end-of-life date.
At some point in this very long thread it was suggested that the new build environments should tag wheels with the older tags when they don’t need any newer library features (it should be possible to detect this automatically). I don’t know if that actually got implemented. Assuming it did, though, that would indeed be a reason why my suggested definition of “complete” wouldn’t work. Let me try again:
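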
I claim we won’t have enough information to judge whether the perennial proposal makes sense until both of the following are true:
A supermajority of the daily connections to production PyPI, by pip running on Linux, were a version of pip that understands the manylinux2014 tag
A supermajority of the wheels on production PyPI that contain compiled code for Linux have had an upload, with a “final release” version number, that was compiled in the manylinux2014 build environment
Once those are both true, we will need to canvass the community of people who have built wheels in the manylinux2014 build environment, and the community of people who have downloaded wheels for use on Linux, to find out whether there were any unexpected problems arising from the transition that need addressing by changes to the perennial process. (Probably we’ll hear about some problems in the form of bug reports on pip, auditwheel, the build environment, and specific compiled-code packages, but I don’t think we’ll discover all of the problems if we don’t do some outreach.)
Now that manylinux_2_28 is a reality, would it be time to start talking about the next manylinux version? Would that be manylinux_2_34, using glibc 2.34, based on a version 9 BaseOS? (maybe AlmaLinux 9?)
Multiple popular releases already ship glibc 2.34+: Fedora 35 and 36, Ubuntu 21.10, and Ubuntu 22.04 LTS, as well as rolling releases like Gentoo, Manjaro Stable, and openSUSE Tumbleweed. A new manylinux image would be especially useful for Ubuntu 22.04 LTS, which is also used a lot in CI.
I think the discussion about the base image for 2_28 showed that it’s good to stay based on RHEL and its derivatives (if only for the devtoolset backports!). They should all be ABI-compatible anyway, though there are slight variations (see the OP of that discussion). Provided that rhubi 9 comes with a current devtoolset, I think that would be the most attractive option.
It would also continue the pattern so far: manylinux2014 is built on CentOS 7 (glibc 2.17), manylinux_2_28 on EL8 (glibc 2.28), so manylinux_2_34 would follow naturally on EL9 (glibc 2.34).
There’s a pretty large glibc gap between RHEL 7 & 8, but the Debian-based 2_24 struggled with adoption, particularly due to the lack of modern compilers, and is almost EOL, so I’m not counting it.
All that being said, I think there’s really no rush for this. The motivating/constraining factor here is not CI usage, but how many users have a new enough glibc (& pip) to consume manylinux_2_34 wheels. The answer is about 10% of python 3.10 users, and microscopic amounts of all other python users. That is not a reasonable user base (currently) to justify the maintenance effort for most projects to publish such wheels (in contrast, glibc 2.28+ is quite-to-very widespread for everything but python 3.7, which is starting to drop off the radar anyway, cf. e.g. NEP29).
That’s only for python 3.10. Across all consumers it’s <1.5%. Leaving out python<3.8, it’s still well below 10%.
The good thing is that there’s less to do these days. It comes down to making a choice of RHEL clone (small differences, nothing major), and this time we don’t have the end of CentOS-as-we-knew-it to redigest. Also, pip etc. won’t have to be updated anymore for new wheels (thanks to PEP 600).
You can open an issue on the manylinux tracker of course, but IMO you’re still way too early on this.
any of rhubi, alma, rocky or centos stream would work in principle
I may have spoken too soon on this… RHEL recently announced that they will not make their sources publicly available anymore (outside of CentOS Stream), which basically breaks the model of Alma, Rocky, etc. There’s an ongoing discussion about whether / how Alma can continue. If it can’t, we might have to switch everything over to rhubi (which at least now has the devtoolset backports whose absence ruled it out as a candidate at the time).
As a small update here, with the help of @mayeut, manylinux-timeline now has a plot that shows the distribution of glibc versions only for downloads on Python versions that aren’t yet EOL. And since Python 3.7 reached EOL at the end of this month, we can now update these numbers.
In short, 90% of Python ≥ 3.8 users now have better than glibc 2.28, and ~15% have better than glibc 2.34 – the latter still quite far from indicating ecosystem readiness for a potential manylinux_2_34.
That’s an average across many projects of course, and it’s conceivable that specific cutting-edge projects might want to start using newer glibc features before the overall average rises above a given threshold. But it would then be on such a project to show that its user base broadly supports glibc 2.34.
As a concrete example, I recently stumbled across some download numbers for pillow (that @hugovk thankfully collected), and the distribution is as follows:
[table: pillow download share per manylinux tag (rows like “2_17 == 2014”), given as % within Python 3.x; the percentage values did not survive extraction]
I found this interesting because it illustrates the strong skew across python versions, and how prevalent glibc < 2.28 still is.