Pre-PEP discussion: revival of PEP 543

Hi all,

We (@jvdprng and myself) are planning to resurrect the previously withdrawn PEP 543 as part of our work with the Sovereign Tech Fund. To this end, we have looked at the history of this PEP, considered what has changed since it was actively worked on, and are planning to open a PR for a new PEP with some proposed changes that we will summarize below.

We believe that the problems that PEP 543 aims to solve remain largely the same. The PEP lists five bullet points, and since its withdrawal, only the fourth bullet point related to the use of system trust stores has been addressed (by the truststore package).

The goals are roughly the same as described in PEP 543:

  1. To provide a common API surface for both core and third-party developers to target their TLS implementations to. This allows TLS developers to provide interfaces that can be used by most Python code, and allows network developers to have an interface that they can target that will work with a wide range of TLS implementations. As an additional goal, this API should be as simple as possible, preventing users from making insecure choices.
  2. To provide an API through which few or no OpenSSL-specific concepts leak. The ssl module today has a number of warts caused by OpenSSL concepts leaking through to the API; the new Protocol classes would remove those specific concepts.
  3. To provide a path for the core development team to make OpenSSL one of many possible TLS backends, rather than requiring that it be present on a system in order for Python to have TLS support.

Our draft PEP includes the following changes to the interfaces compared to PEP 543:

  • Remove the concept of a wrapped socket. Based on this post on the Python discussion boards and the following post, it seems that this concept is no longer desirable and/or necessary. Instead, the backend will create a socket-like object, wrap it, and return it to the user. The socket should be non-blocking to enable sans-io abstractions.
  • Change the ABCs to Protocols, and add extensive typing information.
  • Remove support for all deprecated features:
    • All protocol versions except TLS v1.2 and v1.3
    • Next protocol negotiation using NPN
    • All TLS v1.2 cipher suite definitions except modern, forward-secure ones (though users can still use older cipher suites by providing the integer value)
  • Disallow insecure user choices:
    • Server authentication is always enabled for clients. It is not possible to disable this.
    • Clients will either use the system trust store by default, or their own if provided.
  • Simplify the API further by:
    • Splitting client and server configuration. Due to the asymmetry in client versus server authentication, we believe it best to split these two objects.
    • Removing the SNI callback function. The implementations should transparently handle SNI based on a set of certificates provided by the caller.
    • Removing password-protected private keys as an option. It violates separation of concerns and compels backends to support something that’s orthogonal to TLS itself (PKCS#8 decapsulation with several very mediocre KDFs).
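
To make the “ABCs to Protocols” point concrete, here is a minimal sketch of what a structural protocol for an in-memory TLS object could look like. The name `TLSBuffer` and its method set are purely illustrative, not the draft’s actual definitions; the point is that structural typing lets existing objects such as `ssl.SSLObject` satisfy the interface without inheriting from any ABC:

```python
import ssl
from typing import Protocol, runtime_checkable

@runtime_checkable
class TLSBuffer(Protocol):
    """Hypothetical structural type for an in-memory TLS object.

    Method names mirror ssl.SSLObject; the draft PEP's actual
    protocol definitions may differ.
    """

    def do_handshake(self) -> None: ...
    def read(self, length: int = 1024) -> bytes: ...
    def write(self, data: bytes) -> int: ...

# Protocols match structurally, so the existing ssl.SSLObject already
# satisfies this interface without subclassing anything:
ctx = ssl.create_default_context()
obj = ctx.wrap_bio(ssl.MemoryBIO(), ssl.MemoryBIO(),
                   server_hostname="example.com")
assert isinstance(obj, TLSBuffer)
```

Note that `runtime_checkable` only verifies that the named methods exist, not their signatures; the real value of the Protocol approach is for static type checkers.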

Any feedback is welcome! We hope to understand whether any of our proposals would negatively impact current users of the ssl package or future developers of TLS-backends.

We are also wondering whether this should be in a single PEP or whether it should be split into two PEPs: one focusing exclusively on the client side (i.e. server authentication) and another focusing on the server side (i.e. client auth). The client side (= server auth) is likely the most urgent, the most impactful, and the least complex, and we’re happy to scope this PEP down to just those components if doing so will make things easier for the CPython team to evaluate!

(Our current plan is to submit the text of the draft PEP itself once we get a PEP-Delegate here. But if it’s easier/more correct to submit the draft and share our example implementation before that, we’re happy to do that as well! Also, please let us know if we’ve mis-categorized this by putting this into the Core Development stream instead of the PEPs one.)

+CCing @sethmlarson in his capacity as SDIR and truststore maintainer as well :slightly_smiling_face:


Why expose a socket object at all (which I assume implies it will issue actual IO calls, even if non-blocking), instead of a purely in-memory channel object that can then freely be reused for socket-like objects?

The ssl module already provides such an in-memory channel object as SSLObject, and it is used for example by asyncio including for the IOCP-based event loop on Windows (which probably wouldn’t be able to make use of a “non-blocking SSL socket” object: before SSLObject, asyncio had to use select on Windows for non-blocking SSL operation).
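
For readers less familiar with it, the existing SSLObject/MemoryBIO machinery already demonstrates the channel style: the caller feeds received ciphertext in and pumps outgoing ciphertext out, and no I/O happens inside the TLS object itself. A minimal sketch:

```python
import ssl

ctx = ssl.create_default_context()
incoming = ssl.MemoryBIO()   # ciphertext received from the peer goes here
outgoing = ssl.MemoryBIO()   # ciphertext to send to the peer appears here
tls = ctx.wrap_bio(incoming, outgoing, server_hostname="example.com")

try:
    tls.do_handshake()       # purely in-memory: no socket, no I/O calls
except ssl.SSLWantReadError:
    pass                     # expected: the peer hasn't responded yet

# The ClientHello now sits in the outgoing buffer, ready for whatever
# transport the caller chooses (blocking socket, IOCP, test harness...).
client_hello = outgoing.read()
assert client_hello[:1] == b"\x16"  # 0x16 = TLS handshake record type
```

This is the shape a channel-based replacement API would generalize: the TLS state machine is decoupled from the transport, which is what makes IOCP and sans-io integrations possible.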


I skimmed over this point on my first read and was rewriting my own post that it links to :smiley: Thanks Antoine for quoting it and saving me from embarrassing myself!

I think it’s most important to choose the level to define this API at - specifically, what features does the returned socket-like object need, and why are they greater than a file-like object? And what creation arguments does it need, and why are they different from a URI string?[1]

There’s not really any way we can add this that won’t require users to adapt, so sticking close to any existing API isn’t going to save much effort. Making sure we can easily use it in asyncio[2] and urllib.request (and any 3rd party libs who come along for the ride) seems more practical. Particularly if we can design an API that somewhat naturally works in both sync and async.

  1. I can think of a few ways they’re different, so I’m not suggesting a URI string is sufficient. But I think we’re better off starting small and adding what people need than starting with “all of OpenSSL” and trying to trim it down. ↩︎

  2. Potentially in a new loop type, depending on how the compatibility works out. ↩︎


I suspect this may be a worthwhile structure for those of us who don’t live deep in this area. Even if there’s significant overlap in practice, a lot of things will be conceptually clearer if the “if/else” is in the title or a clear section header rather than in every other paragraph.

From an implementer or consumer POV there can also be a difference. It’s probably not unreasonable for a backend to want to only implement client auth, or only care about server auth, especially when you start taking into account custom protocols (which I assume would have a chance of using this interface?[1]). Implementing both would be the norm, I expect, but I suspect it’s a case where treating them separately only emphasises the parts that are common.

  1. And yes, we’d rather hardware manufacturers would stop inventing their own protocols and just use TLS for everything, but I personally don’t know how to stop it, and would love it if they chose Python for first-class support because we provide the best extension experience. ↩︎


Given the complexity of the proposal, I think it would be better to have this implemented as a PyPI-hosted package, where it can prove its usefulness, before we consider deprecating the ssl module and eventually replacing it with a new tls module that follows the PyPI package’s logic.

I’d also like to chime in with the others in that evolution is typically better than revolution. Gradually changing the ssl module to learn about native TLS sub-systems would probably be easier to sell than completely replacing one API with another.

A side note:

This strategy is not going to fly: users will always need a way to adjust configurations so they can connect to, for example, internal servers that don’t provide trusted certificates. If we remove this possibility, Python would be unable to connect to many IoT devices and to servers on LANs, which often use self-signed certificates. That would make Python much less attractive for LAN use cases such as home/company/factory automation, sensor monitoring, etc.


I have run into situations already, outside of my control, where I had to disable server authentication as a temporary measure until a third party got their act together. While I certainly agree that by default server auth needs to be enforced, there should be a mechanism for users to explicitly bow out of it.


I concur with Marc-André: starting as a third-party implementation sounds like a more pragmatic plan. Once the implementation is rock solid, you can consider setting the API in stone and going through the complex PEP process.

I think that it’s a reasonable thing to give the user the choice to opt-in for an insecure connection.

wget has a --no-check-certificate option, and curl has --insecure. Web browsers give users that choice as well. It’s more a matter of documentation explaining the risks clearly.

There are enough situations where you trust your network and the host (ex: localhost), you cannot change the broken server security (or it’s just not worth it), but you really need to retrieve information from the server. Testing/development is a good concrete example of that use case.

Right, such an API is more complicated to use and to write, but it covers more use cases. There are also API designs that are “sans I/O” or that work in both sync and async styles.


My rationale for this is programmer expectations, although I recognize that my expectations as a programmer are not necessarily others’ :sweat_smile: – the file-like socket abstraction is a familiar “high-level” interface for both junior and experienced engineers, whereas in-memory channels generally (IME) come with experience or familiarity with async programming idioms.

That being said, I agree that the socket abstraction doesn’t necessarily play well with the current best-practice Windows (and macOS) TLS APIs. @jvdprng and I will look at our current WIP implementation and see what it would take to turn it into a channel-style API (which lends support to the argument below that this should be iterated on publicly on PyPI).

Thanks for calling this out. I see (at least?) two conflicting pressures in a design here:

  • “External” consumers of a replacement ssl module will probably principally use it for HTTPS. For these users, a URI string and a file-like object are the appropriate level of abstraction (modulo async cases).
  • “Internal” consumers are more varied (to my understanding): CPython has support for SMTP over TLS, IMAP over TLS, etc. and these protocols don’t necessarily have standard URL schemes. For these, the appropriate level of abstraction is probably (hostname, port) and potentially a “richer” file-like object that exposes attributes (e.g. for the peer certificate).

Sounds good. @jvdprng and I have a bit of cleanup to do, but we’ll plan on making a release of our initial API design (and demo adaptation on top of the current ssl module) and we’ll link it here :slightly_smiling_face:

These are good points, thank you! I agree that we’ll need to accommodate these cases; from a quick survey of what other modern TLS APIs do (e.g. Go’s), some kind of insecure_skip_verify=True kwarg or similar would probably be appropriate for this.
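
For comparison, Go’s crypto/tls exposes this as the Config.InsecureSkipVerify field, while today’s ssl module requires two separate steps in the right order (the insecure_skip_verify spelling above is only a placeholder, not an existing parameter):

```python
import ssl

# Today's ssl-module escape hatch: verification is disabled in two
# steps, and check_hostname must be turned off before verify_mode is
# relaxed (setting CERT_NONE first raises ValueError).
ctx = ssl.create_default_context()
ctx.check_hostname = False
ctx.verify_mode = ssl.CERT_NONE
```

A single, loudly named kwarg would make the opt-out both easier to grep for in audits and harder to get half-wrong.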

Thank you to everyone who has responded so far! To summarize the consensus responses (please correct me if I get these wrong or miss one), it sounds like myself and @jvdprng need to:

  • Re-evaluate a channel/in-memory object design rather than a socket-like design, to better accommodate async cases
  • Determine an appropriate “level” for this API to operate at, or multiple layers if internal/external usage of TLS APIs is sufficiently varied.
  • Ensure that our design includes accommodations for unverified TLS connections, a la curl --insecure

Per the recommendations about iterating publicly in prep for the actual PEP, we’ll do some cleanup and plan on making our repository public (along with a PyPI release) in the coming days (I’d commit to doing it today, but I’m at a conference :sweat_smile:)


How do I set TCP options on the socket that is inside the implementation? For example, to turn Nagle’s algorithm on or off.

If we land on a socket-like API: this would be done with a setsockopt or similar method. The absence of a wrapping API would not prevent this, since the socket itself (or the thing pretending to be a socket) could still receive options.

If we land on a pure channels/memory API: this would be the responsibility of whatever socket/networking layer you connect to the TLS object(s). I’m a little bit less familiar with async network programming, but my understanding is that you can poke at the underlying socket object with asyncio at least (and IIRC, recent versions set TCP_NODELAY by default).
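
To illustrate the channel-design answer: TCP-level options would simply be set on the plain socket that the application owns and connects to the TLS object, for example:

```python
import socket

# Under a channel design, transport options live on the plain socket
# the application owns; the TLS object never needs to know about them.
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)  # disable Nagle
assert s.getsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY) != 0
s.close()
```

This is one of the cleaner consequences of the channel split: TCP tuning and TLS configuration stop being entangled in one object.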

Yeah, another aspect worth designing here is extensibility, and particularly “how can I set options that I know the implementation understands but aren’t baked into the API specification”. Potentially “skip verification” could also fall into this category, so that if someone wanted an implementation that chose not to support the option, they could still be compliant.

It strikes me that this layer might be the right place to mock out network connections for tests, or tracing, or even some clever caching (i.e. there’s no reason to assume that behind the API is a real socket or a real network - it might be a fake one). Not sure how or whether that would influence the design, but it might make some decisions more obvious. (I don’t think this is “sans IO”, but rather, it’s enabling/leading the user of the API to be sans IO. An implementation of the API would be as much IO as they like.)


There are some protocols that can be upgraded from TCP to TLS after a few in-the-clear messages. Being able to wrap a socket supports this style of protocol. But the protocols I know of that do this are, I think, not considered good from a security point of view. Maybe you consider this out of scope.

Yes, I think that’s a good summary. The protocols that I’m aware of that allow the STARTTLS pattern also have widely adopted “pure” TLS variants that are considered much more secure, and as such it’s my opinion that an improved API here should consider the old pattern out of scope.

(This won’t make STARTTLS impossible in Python, since third party libraries can continue to support it.)
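
For reference, this is the wrapping primitive that third-party STARTTLS code would keep using from the current ssl module (the hostname below is a placeholder and no connection is actually made):

```python
import socket
import ssl

# The current ssl module's wrapping primitive, as STARTTLS uses it:
# speak plaintext first, then hand the already-connected socket to TLS.
ctx = ssl.create_default_context()
raw = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
# ... plaintext EHLO / STARTTLS exchange would happen here ...
tls = ctx.wrap_socket(raw, server_hostname="mail.example.com",
                      do_handshake_on_connect=False)  # handshake deferred
assert isinstance(tls, ssl.SSLSocket)
tls.close()
```

Keeping this in the ssl module (or a third-party package) while the new API omits it means the upgrade pattern stays possible without shaping the new abstraction around it.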

Will what you are proposing support asyncio? What would be the consequences for asyncio’s existing ssl support?

I guess you are thinking of email protocols used by MUAs. For those, several implicit TLS protocols and ports have been standardized: RFC 8314 - Cleartext Considered Obsolete: Use of Transport Layer Security (TLS) for Email Submission and Access

However, port 25 ESMTPS communication (which uses STARTTLS to enter TLS) will not go away anytime soon, since this is the default way mail relays (MTAs) communicate with each other, so I don’t think dropping the logic for upgrading a plaintext connection to TLS is an option for a general-purpose language such as Python.

That said, I also don’t see a reason why we can’t support both: an implementation wrapping an existing socket and one which implements implicit TLS directly as a socket.


Something else I’d like to draw attention to, which hasn’t come up in the discussion yet (at least as far as I understand it):

The PEP mentions:

…this PEP is withdrawn due to changes in the APIs of the underlying operating systems…

as reason for its withdrawal.

Could you elaborate on the stability of the OS TLS APIs? If those change every few years, we may end up in a situation where the new TLS abstraction can no longer be implemented due to such a change.

Also: Even if the TLS abstraction can be adapted, the layer would still have to continue to support older OS TLS APIs for quite some time.

This sounds like a risk for a stdlib module and a scenario which is much better handled by a PyPI package or suite of packages.

Aside: I am a great fan of OpenSSL and the independence it offers over commercial OS-level TLS implementations (e.g. no obscure certificate stores, closed-source security implementations, lagging and sometimes opaque CA root list updates, cipher/hash suite choices subject to regulation, etc.), so my position may be somewhat biased :slight_smile: I can fully understand why you’d want to use OS-level TLS implementations, but I also find it very important to have a choice.

Given that OpenSSL will be supported for the Unix world, it becomes a question of also allowing Python on Windows and macOS to be built against OpenSSL as an alternative to the native TLS stack.


My thinking here is that the design will be suitable for “drop in” use within asyncio: I will defer to people who are more familiar with async programming here, but my thought is that the current loop.create_connection() API will be able to take either a new tls= kwarg or re-use the existing ssl= one, with additional support for these new interfaces. Under the hood this would then create a nonblocking socket (or memory channel), retaining the current asyncio connection APIs around TLS/SSL.

(Some related abstractions might need to adapt slightly, since one of the proposed design improvements here is reducing the number of abstraction/OpenSSL-specific details that get leaked through Python APIs. So, for example, BaseTransport.get_extra_info() may need to return different/reduced information with these new APIs versus the ssl ones).

My understanding is that SMTPS (i.e. TLS-initiated SMTP) is significantly more popular than opportunistic encryption with ESMTP, and has been for several years. Gmail, for example, has supported SMTPS on port 465 for at least a decade, as have Outlook and other large hosted email providers.

In particular, I’ve checked that each of the following supports SMTPS:

  • Gmail
  • Outlook
  • Protonmail
  • Fastmail
  • Yahoo
  • Zoho

(Together, I think these constitute over 90% of SMTP handling on the public internet, and I think it’s safe to say that the next ~two dozen largest SMTP hosts also support SMTPS. But I will substantiate this more concretely.)

I see this not as a matter of “can”, but “should” :slightly_smiling_face: – Python is at this point idiosyncratic among general-purpose programming languages when it comes to the “socket wrapping” pattern: the only other major language that I can find that supports it (but does not encourage it) is Ruby, which does so for similar reasons (having a public API that closely reflects OpenSSL internals).

The reasons for this are AFAICT related to the above: the security model of opportunistic encryption is harder to explain to users, is subject to uncontrolled downgrades that compromise transit integrity/privacy, and is more misuse-prone. It’s also a “speedbump” in terms of current networking idioms: a lot of internet traffic is TLS at this point, so making programmers navigate two layers of abstraction (sockets and TLS) to get a functional I/O object makes Python harder to use than strictly necessary :slightly_smiling_face:

With that being said, I think the design constraints proposed above would make it relatively straightforward for a user (or library) to build their own socket-wrapping primitive, especially if we settle on a memory channel design. IMO this is the best of both worlds, in that it aligns Python’s TLS APIs with other major languages (e.g. Go), reduces the overall API surface (wins for both security and maintainability), and doesn’t outright prevent the wrapping case (just doesn’t expose it directly as a public API).

Sure! I’ll ask @jvdprng to fill in some more details from his research as well, but my understanding is that the changes in question were primarily macOS’s deprecation of Secure Transport in favor of Network.framework.

The former was based on OS sockets, while the latter is event-driven and uses abstract “connection” handles that aren’t necessarily tied to a socket or another specific transport interface. My understanding is that this change happened at roughly the same time that the two original PEP 543 authors could no longer find time to work on it, so it’s less that the changes are insurmountable and more that they were inopportune and required a larger architectural rethinking (e.g. towards memory channels, which is where we’re trending now).

I agree with the points about risk and adaptation. At the same time, I think it’s important to note that both this proposal and the original PEP don’t require that Python use the OS’s TLS stack: both are intended to put forward a simpler and easier to maintain abstraction, one that’s compatible with OpenSSL without exposing OpenSSL’s internal implementation details like the current ssl module does. This IMO is a sufficiently good reason to consider a new TLS API within the standard library: CPython can choose to continue to use OpenSSL (and only OpenSSL) under the hood, but can do so more confidently knowing that it controls the abstraction presented to users without leaking (potentially changing) implementation details from OpenSSL itself.

I also agree with your point about independent TLS implementations, and that choice is important here! I see this proposal as deepening user choice, since it doesn’t preclude CPython continuing to use OpenSSL but also provides a reusable abstraction that other OSS (or vendor) TLS stacks can satisfy. Per above, IMO it’d be a “win” for TLS in CPython even if no non-OpenSSL backends emerge, solely because a fixed interface with fewer abstraction leaks will be easier to maintain and more likely to attract ongoing contributor attention :slightly_smiling_face:


Yes, this is my understanding as well. However, since we only started looking at this topic relatively recently, it is difficult to comment on the stability of OS TLS APIs over multiple years. What I can say is that, with the exception of the Secure Transport / Network.framework transition, all other remarks regarding other TLS backends from PEP 543 (Windows SChannel and Mozilla NSS) still appear to be accurate for current versions.

As with the original PEP 543, the goal is to ensure that this interface will be used by the portions of the standard library that interact with TLS, including asyncio. In order to achieve this we must ensure that it can be used, which includes providing the required support. We aim to try and make it a drop-in replacement as much as possible, but we want to avoid making it a 1:1 replacement in places where this implies inheriting OpenSSL concepts in the new TLS API.


Thanks so much for opening this discussion, Will. What I’m hoping this PEP will achieve is to reduce maintenance work for core developers, library maintainers, and users by pushing the details to the operating system so sysadmins can properly configure systems.

The long-term end goal in my mind is using this functionality in the stdlib automatically and one day not shipping OpenSSL with Windows and macOS installers. This would save us from having to make security releases for OpenSSL vulnerabilities. To start it makes sense to ship as a standalone library like Truststore to prove the functionality and API.

I saw ciphers were mentioned. IMO, cipher suites shouldn’t be controllable within the program; they should instead defer to the configuration of the system. This removes the need to add support for new ciphers, and keeps CPython and libraries from being responsible for which cipher suites are offered by default (in case a cipher becomes broken).