Planning for an MSVC ABI break

A colleague of mine recently pointed me to this conda-forge discussion about planning for an ABI-incompatible release of the Windows STL and MSVC runtime, which they estimate will land in the next 1-2 years. The source is a comment from Stephan T. Lavavej, the lead MSVC STL maintainer:

v14 => vNext will be a total ABI break (like VS 2008 => 2010 => 2012 => 2013 => 2015), including renaming all of the versioned DLLs. No OBJ/LIB mixing will be possible, and DLL/EXE mixing will work only if the DLL interfaces are ABI-stable (e.g. COM, or completely extern "C" with no trace of STL types, etc.). We’re going to change the representations of tons of types, and remove/change a ton of STL DLL exports.

It’s been almost a decade since Windows users have had to worry about MSVC standard library ABI stability, and since then we’ve seen the stable Python ABI adopted more, including on Windows.

Note that, based on my (admittedly limited) understanding of the Windows platform stability guarantees, I believe that things which link against the UCRT (the “Universal C Runtime”) should remain stable in a future vNext. Since CPython keeps everything under extern "C", Python interpreters should remain ABI-compatible, I think. Even if that is the case, many Python extension modules are written in C++ and link against the STL, so it could be an issue to mix wheels that are linked to different versions of the STL.
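To make the extern "C" point concrete, here’s a minimal sketch (module and function names are hypothetical) of an extension whose entire binary surface is the CPython C API. As I understand it, the key property is that nothing in the exported interface mentions C++ types, so the module doesn’t care about the C++ ABI at its boundaries:

```cpp
// Minimal sketch of an extension whose whole boundary is the C API.
// PyMODINIT_FUNC already wraps the init function in extern "C" when
// compiled as C++, so no C++ names appear in the module's exports.
#define PY_SSIZE_T_CLEAN
#include <Python.h>

static PyObject *hello(PyObject *self, PyObject *args) {
    (void)self;
    (void)args;  // METH_NOARGS: args is always NULL
    return PyUnicode_FromString("hello");
}

static PyMethodDef demo_methods[] = {
    {"hello", hello, METH_NOARGS, "Return a greeting."},
    {NULL, NULL, 0, NULL},
};

static struct PyModuleDef demo_module = {
    PyModuleDef_HEAD_INIT, "demo", NULL, -1, demo_methods,
    NULL, NULL, NULL, NULL,
};

PyMODINIT_FUNC PyInit_demo(void) {
    return PyModule_Create(&demo_module);
}
```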

@steve.dower or anyone else more knowledgeable than me, please do correct any inaccuracies in what I’ve said.

I wanted to start this thread to discuss a few questions I had on the topic:

  • Do we need to create a Windows ABI compatibility tag in wheels?
  • Does CPython need any changes because of the MSVC ABI break? (I predict not, but could be wrong!)
  • Is there any way to prevent or detect mixing of STL versions if we choose not to create a Windows ABI tag in wheels?

I believe that the design goal for the MSVC libraries is that it’s fine and expected for different modules in the same process to be linked to incompatible MSVC libraries, as long as they don’t pass the libraries’ objects directly between them. So in theory this shouldn’t be an issue for us at all. Probably someone out there has some local interface for directly passing C++ objects between two Python packages, but since that’s a private contract between those packages, it’s up to them to decide how they want to handle any incompatibilities.
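As a sketch of that design goal (all names hypothetical): an export that only moves plain C types across the DLL boundary is safe to call from a module linked against a different MSVC runtime, while one that hands over an STL object bakes one runtime’s type layout and allocator into the contract.

```cpp
// Hypothetical exports illustrating the rule above. C++ is fine *inside*
// a module; the question is what crosses the DLL boundary.
#include <cstddef>
#include <cstring>
#include <string>

// Risky if caller and callee use different MSVC runtimes: std::string's
// layout and allocator belong to one specific C++ runtime, and the
// caller's runtime will try to destroy memory the callee allocated.
__declspec(dllexport) std::string make_greeting_unsafe() {
    return "hello";
}

// Safe: only plain C types cross the boundary, and the caller owns the
// buffer, so no allocation is shared between runtimes.
extern "C" __declspec(dllexport)
int make_greeting_safe(char *buf, size_t buf_len) {
    const char msg[] = "hello";
    if (buf_len < sizeof msg) return -1;       // buffer too small
    std::memcpy(buf, msg, sizeof msg);         // includes the NUL
    return static_cast<int>(sizeof msg) - 1;   // length without the NUL
}
```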


@ethanhs, could you perhaps fix the link to the comment? It currently doesn’t seem to point anywhere. The comment is in “vNext: Make the abi-breaking-changes branch available on GitHub” (microsoft/STL#169, https://github.com/microsoft/STL/issues/169).

I’d add another question: does CPython want to switch its binary builds to the new ABI at some point? Are there any advantages in doing so?


Also cc @h-vetinari, who initiated the conda-forge discussion above.

Sorry for another post here, but now that I’ve read microsoft/STL#169 in full, it seems the ABI switch is still hypothetical? That is, Microsoft is considering it, but it’s a management decision they haven’t made yet (there are obviously tons of downsides in addition to the upsides)?

Thanks for the ping. I’ve been following the discussions around vNext for a while (due to the expected impact), and while nothing has been announced yet, the language has shifted fairly dramatically from caveat-laden caution to “this will happen”. Cf. here for a more recent discussion in the STL repo on this.

That itself is not a concrete sign, but there are others. I’m pretty certain the STL maintainers are not at liberty to confirm or deny anything (and that probably also goes for @steve.dower in his role as an MSFT employee), so until there’s an official announcement, this is likely the best information we have.

I wanted to start this discussion for conda-forge because it’ll be a big lift there in particular, and we might want to attach other changes/clean-ups that are only conceivable in a “rebuild the world” scenario, which happens extremely rarely; some of those might need a long lead time to prepare in and of themselves.

My understanding is that the UCRT effort went hand-in-hand with the switch to ABI-stable VS releases, so I’m not sure whether it will actually stay compatible. I’m hoping MSFT will not take 1-2 decades to implement C23, and if it turns out they need to break ABI at some point (even under a very narrow definition of it, e.g. “MSVC only supports C string constants with a length of up to 64 kB”, and changing that is an ABI break), then vNext would be the time to do it.

Python did have a matrix of which Python version got built with which VS version on Windows, which AFAIU wasn’t changed over the lifetime of that Python version due to ABI concerns (and in practice, distributors like conda-forge matched that version). That was part of the reason why SciPy had to restrict itself to whatever VS2008 supported while Python 2.7 was still around. I haven’t followed what the current status is (it became less relevant with the ABI compatibility of VC 14.x, I guess), but I think the chance is high this will impact CPython if/when it happens.


For clarity, I have only had a single internal discussion related to this, and it was specifically someone warning me that the constexpr change was going to break people. This is the first I’ve heard of a v14 to vNext[1] transition for C++.

What is pretty clear is that it’s only the C++ ABI, so CPython itself is almost certainly unaffected. The most we’ll likely do is include both vcruntime140.dll and vcruntime_vNext.dll in the distro as soon as they’re available, so that packages can assume it.

The rest of the likely change is for msvcp140.dll and other C++ libraries (concrt140.dll and such, which most people have never encountered). These are never included with CPython, and AFAIK Conda has the C++ runtime as its own dependency, so packages that rely on it will get a version-matched one specifically. It shouldn’t hurt to have v140 and vNext DLLs side-by-side, so hopefully that’s encoded all right in package names.

More recently, I’ve been suggesting that extension modules statically link their C++ runtime. The tricky thing about C++ (and the STL in particular) is that half of it is in header files and the rest is in specific functions that might be used by those headers, but the way the names are encoded adds a binary dependency on the shape of the types used in the header. So changing any part of the name or structure of a private class implementing part of a template may turn out to be a binary change, even though it’s mostly compiled away in the user’s actual program. When you statically link the C++ runtime, those just become your functions, so there’s no binary compatibility to worry about. And it’s typically not as much of a size increase as including the DLL, though it depends on how much C++ you’re using.
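A rough illustration of the trade-off (file and module names hypothetical): the same translation unit built with /MD imports the shared C++ runtime DLL, while /MT compiles the STL machinery it uses into the module itself.

```cpp
// mymod.cpp (hypothetical). Built two ways with cl.exe:
//
//   cl /EHsc /LD /MD mymod.cpp   -> mymod.dll typically imports
//                                   msvcp140.dll (shared C++ runtime)
//   cl /EHsc /LD /MT mymod.cpp   -> the STL code used below is compiled
//                                   into mymod.dll; no msvcp*.dll import
//
// Either way the export itself is plain C, so the STL stays an
// implementation detail of this module.
#include <cstddef>
#include <string>
#include <vector>

extern "C" __declspec(dllexport) size_t total_length(void) {
    std::vector<std::string> parts{"a", "bc", "def"};
    size_t n = 0;
    for (const auto &s : parts) n += s.size();
    return n;  // 6
}
```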

I don’t expect we’ll see many of the same issues we had before v14. My summary (at this stage, based on what I know) is:

  • UCRT will not be changing/breaking
  • distributors that include the C++ runtime (i.e. Conda) should plan to include both v140 and vNext
  • packages that bundle the C++ runtime should stop, regardless of any ABI break :wink:
  • packages that statically link their C++ runtime are likely fine

  1. vNext is a Microsoft term for “the next version we’re going to release”. It’s a convenient placeholder, since “Visual Studio vNext” isn’t going to set expectations the way putting a year in there would, and typically marketing makes last-minute decisions about the actual names anyway. I usually avoid using vNext around here, even though I’m thinking it, but in this post it seems to be in context. ↩︎


Apart from static linking, how does a package get msvcpXXX.dll if it is not bundled?

Also, you mentioned that statically linking the C++ runtime might not have as much of a size increase, but is that true if the package has 100 extension modules?

Ask the user to go to https://aka.ms/vcredist and install the latest installer that’s appropriate for their machine. Or you could try and provide a more direct link if you want. (Or use Conda and add a dependency on the redist package.)

When I looked at matplotlib,[1] it was using exactly one export from msvcp140.dll. If that’s 1 KB of code, then you’d need 500 extension modules to reach the size of the whole DLL. So it really is a calculation that depends on how much C++ you’re using, and generalisations are not going to be very useful.

I think I’ve only ever come across one project that had 100 extension modules, and there would’ve been more size benefit in refactoring into their own helper DLL first before worrying about STL stuff.


  1. Though I also looked at some other projects around the same time, so it may have been one of them. ↩︎

How does that work when several extension modules must share the same C++ objects? Is that a supported use case when the C++ runtime is statically linked inside each module?

I’m sure there are edge cases (comparing addresses of member functions may not work, and it’s possible that reflection or dynamic casting may get weird, though I’d expect that needs a workaround anyway), but in general the C++ DLL is going to be less stateful than the C one. Most C++ state is going to be stored in locals, which means it’s all in the header files, and functions that can be imported from a DLL will have to be pure or use type erasure.[1]

To be clear, though, I am assuming they were all built at the same time, and so have the same version of the C++ library statically linked. If you mix versions, then yeah, you’re probably going to have a lot of trouble.


  1. The type name gets put into the exported function name and the implementation will assume the layout, which means you get matching based entirely on the name. But I don’t think e.g. dynamic_cast<> does name matching at runtime. ↩︎


Both these assumptions (built at the same time & statically linked) are not true for conda-forge, which is why we’re going to be affected. We won’t mix versions, but we’ll need to build things twice for the before/after.


The premise for these was that extension modules are passing C++ objects between them, which I suspect is incredibly rare. I’m not aware of any PyPI packages that do this - nobody wants C++ in their public ABI.

That’s IMO not a rare case; e.g. anything involving protobuf, grpc, abseil, etc. is potentially affected, where C++ objects are explicitly part of the API.[1]

These underlie a bunch of python packages too, but more importantly, conda-forge is not just concerned with python packages.


  1. protobuf even uses abseil types in its API, which additionally may even depend on the C++ standard used to compile. ↩︎


If you say so, but I thought the point of these tools was to have an efficient serialisation format? And so you wouldn’t be passing pointers to live C++ objects and expecting non-virtual methods to Just Work?

Abseil types are being passed around in protobuf. The abseil devs only consider the “everything built consistently (including C++ standard version)” case as supported, but that hasn’t stopped protobuf from using these types in its API.

Consequently, protobuf itself and then stuff on top like grpc can be affected, depending on how deeply their API is used. I don’t doubt that there are ways to use it that won’t break, but at scale we’re going to hit the full ABI surface, so we can’t play fast and loose with hoping for compatibility, unless there are really concrete guarantees.


PyArrow has several extension modules which all access the same types of C++ objects, more or less. It’s a single project/distribution for now, so we can make sure we use the same C++ ABI under the hood, but we’d actually like to split components into several distributions.


Thank you all for your comments. To summarize the discussion so far, if I understand correctly:

  • Projects should not be dynamically linking to the C++ runtime
  • Packages using extern "C"/the Python C API shouldn’t see C ABI breakage, since the UCRT is not changing/breaking and there won’t be changes to the C platform ABI.
  • Because of ^ wheels don’t need any special changes to handle this change
  • CPython may need some changes, but that is opaque to packaging
  • Packages that statically link to different versions of the C++ runtime should be careful when interacting with other packages that statically link to the C++ runtime

I will sleep better knowing this change shouldn’t cause breakage when mixing wheels :grin:


PyPI wheels should not be. Projects certainly can, and should, provided that they’re being built in a consistent (C) environment with the rest of the eventual (Python) environment (such as Conda, or any build-from-source configuration). It’s literally only PyPI that is an issue here,[1] because contributors build independently.

And also unrelated to C++. This would only be if/when a new vcruntime DLL starts appearing (and I’m pretty sure my code there uses vcruntime*.dll as a glob, so any new version will be included pretty much automatically).

Otherwise, your summary seems fine to me.


  1. I guess if someone built a Conda package without relying on an existing Conda package containing the C++ runtime then it would also be an issue. That’s likely more work than doing it right, unless the project has modified their sources so thoroughly that the Conda recipe can’t bypass it. ↩︎


Is there an easy way to check and ensure that they do not? Which MSVC flags would be involved?
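For what it’s worth (not an authoritative answer): dumpbin /dependents some_module.pyd lists the DLLs a built extension imports, so an msvcp140.dll entry indicates the dynamically linked C++ runtime, and the compiler flags in question are /MD (DLL runtime) versus /MT (static runtime). Since MSVC defines the _DLL macro under /MD and /MDd, a project that wants to enforce static linking could add a compile-time guard; a minimal sketch:

```cpp
// Minimal compile-time guard: fail the build if this translation unit is
// being compiled against the DLL C++ runtime. MSVC defines _DLL for the
// /MD and /MDd flags and leaves it undefined for /MT and /MTd.
#if defined(_MSC_VER) && defined(_DLL)
#error "Build this module with /MT (static runtime), not /MD."
#endif
```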