Simple https client in standard library

I’m missing an easy API within the standard library to perform simple HTTPS requests. Something like this:

import http


resp = http.request(
    url='https://example.com/',
    method='POST',
    json={'firstname':'John', 'lastname': 'Doe'},
)


if resp and resp.status == 200:
    print('Ok')
else:
    print('Error')

I’m aware of existing packages like requests, httpx, aiohttp, and numerous other libraries that allow for HTTP requests in Python. However, on embedded systems where Python is available and resources are limited, creating a venv and installing megabytes of packages (with additional dependencies) is cumbersome.

Naive implementation:

import json as jsonlib
from urllib.request import urlopen, Request


def request(url, method='GET', headers=None, data=None, timeout=5, json=None):
    headers = {} if headers is None else headers
    if json is not None:
        # The json parameter shadows the stdlib module, hence the jsonlib alias.
        headers |= {'Content-Type': 'application/json'}
        data = jsonlib.dumps(json)
    data = None if data is None else data.encode('utf-8')
    req = Request(url, method=method, headers=headers, data=data)
    try:
        return urlopen(req, timeout=timeout)
    except Exception:
        return None

Of course this is just an example implementation to demonstrate the concept, so please don’t quote me on it. But even such simple code would already be very useful.

We might also have an async version, logging, and maybe shorthand functions like http.get() or http.post().
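If a single request() entry point existed, the shorthand spellings could be one-line partials. A minimal sketch, building on the naive urllib-based implementation above (none of these names are an existing API):

```python
import functools
import json as jsonlib
from urllib.request import Request, urlopen


def request(url, method='GET', headers=None, data=None, timeout=5, json=None):
    # Sketch in the spirit of the proposal, built on urllib;
    # the json parameter shadows the module, hence the jsonlib alias.
    headers = dict(headers or {})
    if json is not None:
        headers['Content-Type'] = 'application/json'
        data = jsonlib.dumps(json)
    body = data.encode('utf-8') if isinstance(data, str) else data
    req = Request(url, method=method, headers=headers, data=body)
    try:
        return urlopen(req, timeout=timeout)
    except Exception:
        return None


# The shorthands just pin the method argument:
get = functools.partial(request, method='GET')
post = functools.partial(request, method='POST')
```

Whether shorthands should be partials or real functions with their own signatures is a design detail; the point is only that they cost almost nothing on top of the core helper.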

How about making this possible to run via command line:

python -m http.request --post https://example.com --json '{"firstname":"John","lastname":"Doe"}'

That would eliminate the need to install curl or wget.
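A rough sketch of how such an entry point could parse the proposed flags with argparse (everything here is hypothetical; http.request has no command-line interface today, and the flag names just mirror the example above):

```python
# Hypothetical argument parsing for the proposed "python -m http.request" CLI.
import argparse

parser = argparse.ArgumentParser(prog='python -m http.request')
parser.add_argument('url')
parser.add_argument('--post', action='store_true',
                    help='send a POST instead of a GET')
parser.add_argument('--json', help='JSON string to use as the request body')

# Simulate the command line from the example above:
args = parser.parse_args(
    ['--post', 'https://example.com', '--json', '{"firstname":"John"}'])
print(args.url, args.post, args.json)
```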

What do you think?

6 Likes

I believe all HTTP functionality in the standard library only supports HTTP/1.1. In other words, it’s fine for learning or local environments but not well-suited for production use.

To ensure future-proofing and high performance, an HTTP client needs HTTP/3 support; without it, the client may face limitations in modern environments. Developing a full-featured client today exceeds the capabilities of CPython’s standard library and would have been more appropriate in the HTTP/1.1 era.

I would use the requests package from PyPI for this use case.
Why does it need to be in the stdlib?

1 Like

It sounds like there is a need for small tightly focused packages for these constrained environments. Instead of asking the stdlib to fill that need, it would be better to publish special-purpose packages on PyPI. Modules in the stdlib will naturally try to accommodate the broadest needs, and you’ve clearly described the special requirements of your use.

You have an implementation here in this post. It could be the start of the PyPI package.

3 Likes

[citation needed]. The whole Internet was built on HTTP/1.1. HTTP/3 might be faster on benchmarks, but I doubt there is any significant difference for 99% of applications.

You could say this about just about anything in the stdlib. Why do we need json in the stdlib if we can install something from PyPI?

The entire point of the standard library is to provide high-quality implementations of common operations. Making HTTP requests is certainly something many programs do. The standard library has urllib.request, which is quite low-level, and whose authors clearly consider their library inferior, since they tell people to use requests at the top of the documentation. So it would be good to have something friendlier available without having to install packages (which is often not easy in Python).

2 Likes

Servers are generally compatible a long way back. Did you know that a lot of servers are, even today, quite happy to respond to an HTTP 0.9 request? Yeah, I don’t think there’s any problem with an HTTP/1.1 client.

Thank you very much for some initial thoughts.

I agree with both Chrises. This doesn’t have to be a full-featured HTTP client; that role belongs to requests, httpx, and others. I am here to humbly suggest that in a huge number of cases we just need to `GET` or `POST` simple information and that’s it. And in Python, currently, that is not that straightforward.

I agree with Chris W. Why were json, or tomllib (that might be an even better example - PEP 680 – tomllib: Support for Parsing TOML in the Standard Library | peps.python.org), introduced into the standard library? I remember the justification: to simplify the basic workflow, while for more advanced uses there is a dedicated, full-featured, and frequently updated package.

Introducing `fetch` in JavaScript simplified a lot of things.

$ python3.14 --version
Python 3.14.3

$ python3.14 -m venv .venv-requests

$ du -hs .venv-requests
 13M	.venv-requests

$ .venv-requests/bin/python -m pip install requests
Collecting requests
  Using cached requests-2.32.5-py3-none-any.whl.metadata (4.9 kB)
Collecting charset_normalizer<4,>=2 (from requests)
  Downloading charset_normalizer-3.4.5-cp314-cp314-macosx_10_15_universal2.whl.metadata (39 kB)
Collecting idna<4,>=2.5 (from requests)
  Using cached idna-3.11-py3-none-any.whl.metadata (8.4 kB)
Collecting urllib3<3,>=1.21.1 (from requests)
  Using cached urllib3-2.6.3-py3-none-any.whl.metadata (6.9 kB)
Collecting certifi>=2017.4.17 (from requests)
  Using cached certifi-2026.2.25-py3-none-any.whl.metadata (2.5 kB)
Using cached requests-2.32.5-py3-none-any.whl (64 kB)
Downloading charset_normalizer-3.4.5-cp314-cp314-macosx_10_15_universal2.whl (280 kB)
Using cached idna-3.11-py3-none-any.whl (71 kB)
Using cached urllib3-2.6.3-py3-none-any.whl (131 kB)
Using cached certifi-2026.2.25-py3-none-any.whl (153 kB)
Installing collected packages: urllib3, idna, charset_normalizer, certifi, requests
Successfully installed certifi-2026.2.25 charset_normalizer-3.4.5 idna-3.11 requests-2.32.5 urllib3-2.6.3
                                                                                                                        
$ du -hs .venv-requests
 17M	.venv-requests

$ python3.14 -m venv .venv-httpx

$ du -hs .venv-httpx
 13M	.venv-httpx

$ .venv-httpx/bin/python -m pip install httpx
Collecting httpx
  Using cached httpx-0.28.1-py3-none-any.whl.metadata (7.1 kB)
Collecting anyio (from httpx)
  Using cached anyio-4.12.1-py3-none-any.whl.metadata (4.3 kB)
Collecting certifi (from httpx)
  Using cached certifi-2026.2.25-py3-none-any.whl.metadata (2.5 kB)
Collecting httpcore==1.* (from httpx)
  Using cached httpcore-1.0.9-py3-none-any.whl.metadata (21 kB)
Collecting idna (from httpx)
  Using cached idna-3.11-py3-none-any.whl.metadata (8.4 kB)
Collecting h11>=0.16 (from httpcore==1.*->httpx)
  Using cached h11-0.16.0-py3-none-any.whl.metadata (8.3 kB)
Using cached httpx-0.28.1-py3-none-any.whl (73 kB)
Using cached httpcore-1.0.9-py3-none-any.whl (78 kB)
Using cached h11-0.16.0-py3-none-any.whl (37 kB)
Using cached anyio-4.12.1-py3-none-any.whl (113 kB)
Using cached idna-3.11-py3-none-any.whl (71 kB)
Using cached certifi-2026.2.25-py3-none-any.whl (153 kB)
Installing collected packages: idna, h11, certifi, httpcore, anyio, httpx
Successfully installed anyio-4.12.1 certifi-2026.2.25 h11-0.16.0 httpcore-1.0.9 httpx-0.28.1 idna-3.11

$ du -hs .venv-httpx  
 17M	.venv-httpx
$ python3.14 -m venv .venv-aiohttp

$ du -hs .venv-aiohttp
 13M	.venv-aiohttp

$ .venv-aiohttp/bin/python -m pip install aiohttp
Collecting aiohttp
  Downloading aiohttp-3.13.3-cp314-cp314-macosx_11_0_arm64.whl.metadata (8.1 kB)
Collecting aiohappyeyeballs>=2.5.0 (from aiohttp)
  Downloading aiohappyeyeballs-2.6.1-py3-none-any.whl.metadata (5.9 kB)
Collecting aiosignal>=1.4.0 (from aiohttp)
  Downloading aiosignal-1.4.0-py3-none-any.whl.metadata (3.7 kB)
Collecting attrs>=17.3.0 (from aiohttp)
  Using cached attrs-25.4.0-py3-none-any.whl.metadata (10 kB)
Collecting frozenlist>=1.1.1 (from aiohttp)
  Downloading frozenlist-1.8.0-cp314-cp314-macosx_11_0_arm64.whl.metadata (20 kB)
Collecting multidict<7.0,>=4.5 (from aiohttp)
  Downloading multidict-6.7.1-cp314-cp314-macosx_11_0_arm64.whl.metadata (5.3 kB)
Collecting propcache>=0.2.0 (from aiohttp)
  Downloading propcache-0.4.1-cp314-cp314-macosx_11_0_arm64.whl.metadata (13 kB)
Collecting yarl<2.0,>=1.17.0 (from aiohttp)
  Downloading yarl-1.23.0-cp314-cp314-macosx_11_0_arm64.whl.metadata (79 kB)
Collecting idna>=2.0 (from yarl<2.0,>=1.17.0->aiohttp)
  Using cached idna-3.11-py3-none-any.whl.metadata (8.4 kB)
Downloading aiohttp-3.13.3-cp314-cp314-macosx_11_0_arm64.whl (493 kB)
Downloading multidict-6.7.1-cp314-cp314-macosx_11_0_arm64.whl (43 kB)
Downloading yarl-1.23.0-cp314-cp314-macosx_11_0_arm64.whl (86 kB)
Downloading aiohappyeyeballs-2.6.1-py3-none-any.whl (15 kB)
Downloading aiosignal-1.4.0-py3-none-any.whl (7.5 kB)
Using cached attrs-25.4.0-py3-none-any.whl (67 kB)
Downloading frozenlist-1.8.0-cp314-cp314-macosx_11_0_arm64.whl (49 kB)
Using cached idna-3.11-py3-none-any.whl (71 kB)
Downloading propcache-0.4.1-cp314-cp314-macosx_11_0_arm64.whl (46 kB)
Installing collected packages: propcache, multidict, idna, frozenlist, attrs, aiohappyeyeballs, yarl, aiosignal, aiohttp
Successfully installed aiohappyeyeballs-2.6.1 aiohttp-3.13.3 aiosignal-1.4.0 attrs-25.4.0 frozenlist-1.8.0 idna-3.11 multidict-6.7.1 propcache-0.4.1 yarl-1.23.0
                                                                                                                        
$ du -hs .venv-aiohttp                           
 19M	.venv-aiohttp

Just to send one simple HTTP GET or POST request.

1 Like

You have two fairly good options here. The first is the http module - not ideal, but it’s part of the stdlib. The second is requests, which isn’t stdlib, but is so commonly available that you may very well be able to just use it anyway.

It may not be a neat one-liner, but a two-liner using urllib.request isn’t that horribly unwieldy either if a simple request is all you want:

from urllib.request import Request, urlopen

with urlopen(Request('https://www.google.com')) as response:
    print(response.read().decode())
2 Likes

With GET requests you can be even shorter than what you wrote: simply `urlopen('https://example.com').read().decode()` and that’s it. Can you also make a POST request like that? With headers (such as basic auth), JSON dumping, and encoding? Nope.

The problem is with POST requests, headers, and a JSON payload, where you need to set `Content-Type` etc., or `application/x-www-form-urlencoded`. Your one-liner becomes a tedious, error-prone copy-paste process for a language with batteries included.
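For comparison, this is roughly what a POST with a JSON body and basic auth takes with urllib alone today (the URL and credentials are made-up placeholders):

```python
import base64
import json
from urllib.request import Request, urlopen

# Illustrative payload and credentials only.
payload = json.dumps({'firstname': 'John', 'lastname': 'Doe'}).encode('utf-8')
token = base64.b64encode(b'user:secret').decode()

req = Request(
    'https://example.com/',
    method='POST',
    data=payload,
    headers={
        'Content-Type': 'application/json',
        'Authorization': 'Basic ' + token,
    },
)
# Actually sending it (network access required):
# with urlopen(req, timeout=5) as resp:
#     print(resp.status)
```

Every line here is something the proposed one-call API would do for you.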

PS. I have been teaching Python for the last 10+ years. I have given 11.5 thousand training hours to 32k students. Believe me, such a simple addition would make a huge difference.

PS. Have you tried using unittest or doctest for a simple test that an API endpoint works (returns 200), without having to install other test frameworks? pytest is 10 MB, requests 4 MB… for one simple HTTP test.

3 Likes

But then, with all these features included, it’s hard to see how your claim of “This doesn’t have to be full featured http client” can stay true, or where the line for the feature set can be drawn. Eventually this would likely become relatively full-featured, and someone would have to commit to maintaining it, when the requests package already fits the bill and is widely adopted.

I think it isn’t unreasonable for students to be taught how to run pip install requests as well.

The latest requests is 232KB in total size on my system.

2 Likes

I usually use urllib where most people would automatically use requests. The only thing I’d say is actually more work in urllib is having to do the whole headers={"Authorization": "Basic " + base64.b64encode(f"{username}:{password}".encode()).decode()} dance for basic username+password authentication. Other than that, it’s usually the same code, just with arguments in slightly different places. The only major pain point is that every Stack Overflow question asking specifically about urllib will be unhelpfully answered with a requests answer.
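Spelled out step by step, that basic-auth dance looks like this (username and password are placeholders):

```python
import base64
from urllib.request import Request

username, password = 'alice', 's3cret'   # placeholder credentials

# The "dance": join, encode to bytes, base64-encode, decode back to str.
raw = f'{username}:{password}'.encode()
auth = 'Basic ' + base64.b64encode(raw).decode()

req = Request('https://example.com/', headers={'Authorization': auth})
# with urlopen(req) as resp: ...
```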

In fact, if you care about not making bugs or HTTP issues a complete nightmare to track down, urllib is usually more convenient because it throws HTTP errors automatically.

# Honeypot – will silently return usually an empty string if you screw up the URL or something's up with the credentials
requests.get(...).text
# How it should be written
response = requests.get(...)
response.raise_for_status()
response.text

# urlopen is failure-aware by default so is actually less code
with urlopen(...) as response:
    content = response.read().decode()

So I’d support adding a credentials=(username, password) option to Request() and adding some recipes to the urllib docs. I don’t think it can do much else without making bad assumptions that make other usages harder. And specifically, the things being proposed that it should do implicitly…

… are things I would not want a library to do.

4 Likes

Agreed. Although it might be useful to have some helpers for common use cases:

import base64
import json

def add_json_payload(req, data):
    req.add_header("Content-Type", "application/json")
    req.data = json.dumps(data).encode("utf-8")

def add_credentials(req, username, password):
    auth = "Basic " + base64.b64encode(f"{username}:{password}".encode()).decode()
    req.add_header("Authorization", auth)

Maybe as you say, just recipes in the docs for most of them. Or maybe someone would feel they are useful enough to publish a small library of such helpers. At which point, if the library is popular, it could be proposed as an addition to the stdlib.

But I don’t think there’s a pressing need. Like you, I routinely use urllib in cases where I don’t want to bother adding a dependency on requests or httpx. And it’s pretty easy to use, and does what I need.

FTR, HTTP/0.9 was actually broken until I recently fixed it (it should now be fixed for 3.13 and later).

1 Like

:rofl:

And nobody noticed. That’s how often it’s actually relevant. In fact, when I made that post, I started out by saying “I mean, until recently, you could still use HTTP 0.9 with most servers”, then did a few spot checks… and… yeah, it still works. But I haven’t actually used it since, uhh… I don’t even know.

If only it were as easy as pip install requests… Teaching about virtual environments and enough of the packaging ecosystem to successfully install packages would take a lot of time that could be spent more productively, especially if the students are not going to work much with Python specifically after the course.

Speaking of old HTTP versions, I was yesterday years old when I learned most modern HTTP servers will respond with HTTP/1.1 if you make a request with HTTP/1.0.

1 Like

Yeah. I believe that that’s legal, based on RFC 2145 - Use and Interpretation of HTTP Version Numbers which gives the expectation that minor versions are compatible - that is, if you built a client based solely on the HTTP/1.0 specification, and you get back an HTTP/1.1 response, it should behave correctly. Off the top of my head, the only potential incompatibility I can think of would relate to the defaults for connection reuse; but I’d have to see what different servers do when they change version like this.

1 Like

I would use the requests package from PyPI for this use case.
Why does it need to be in the stdlib?

One of the marketing slogans for Python used to be (or still is; I am not sure what the official party line is these days) that it has “batteries included”, which, considering the npm left-pad incident, is a rather neat idea. Yes, of course, for specialised uses one can use a library from PyPI, but there are a huge number of non-trivial Python programs all over the world that use just the standard library.

Yes, I mostly use the urllib library myself whenever I can, but I have been rooting for years for somebody to add a urllib.requests module, which would emulate at least partial support for the requests API on top of the standard library (client.get(URL) and similar).

4 Likes

This is why I wish we could go back to the days when “everyone should use virtual environments instead of global installs” wasn’t the mantra.

Give a beginner a single Python installation with Scripts/bin in PATH, writeable site-packages, and python/pip commands that are actually called python and pip rather than python3.13 and pip3, and even a user with no experience using a terminal can thrive off documented pip install xyz and console-script commands that work as-is, independent of shell or platform. They never have to infer that pytest secretly means python3 -m pytest. They never have to worry about more complicated environment management until they genuinely need it (which for most any non-Python developer, and a good chunk of Python developers, will be never).

Give a beginner virtual environments and it’s “Yeah, you can’t just use the install command in the docs. You have to run python -m venv .venv then source ./.venv/bi... ugh, hang on, you’re on Windows. Are you using cmd, powershell or bash? You don’t know what those are. Hmm, what color is the window? Black? Great, that doesn’t narrow it down at all. What does the prompt look like? Yeah, copy/paste is weird. You highlight it then… Actually, on second thought, try going to your project in file explorer, typing cmd into the address bar and pressing enter. Now it’s call .venv\Scripts\activate.bat and then you can finally run pip install xyz. Congratulations, you installed a single package. Remember to do that call command again whenever you want to use the package.” And that’s assuming they have the guidance to keep them away from inappropriate, contradictory or obsolete advice about sudo or --user or pipx, or instructions written for different environment/workflow managers.

8 Likes

Fun fact: most people already have the requests package in their Python installation by default, or at least after running python -m ensurepip.

Try:

from pip._vendor import requests
print(requests.get('http://www.google.com').text)
3 Likes