PEP 691: JSON-based Simple API for Python Package Indexes

It wouldn’t be difficult to update mousebender for this PEP such that there’s a string constant in mousebender.simple which declares an Accept header to send to the server. There would then be a corresponding function that took the bytes from the HTTP response and the value of the Content-Type header field and figured out what to do.

ACCEPT_HEADER_VALUE = "application/vnd.pypi.simple.v1+json, application/vnd.pypi.simple.v1+html, text/html"

def parse_archive_links_response(data: bytes, content_type: str) -> list[mousebender.simple.ArchiveLink]:
    ...
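
For illustration, here’s a minimal sketch of the dispatch logic such a function might use (the parser callables are placeholders, not mousebender’s actual API):

def _dispatch(data, content_type, json_parser, html_parser):
    # Strip any parameters (e.g. "; charset=utf-8") and normalize the media type.
    media_type = content_type.partition(";")[0].strip().lower()
    if media_type == "application/vnd.pypi.simple.v1+json":
        return json_parser(data)
    if media_type in ("application/vnd.pypi.simple.v1+html", "text/html"):
        return html_parser(data)
    raise ValueError(f"unsupported content type: {content_type!r}")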

What’s the argument against Accept: application/json (as an alias)?

If version information is included in the body (e.g. in a top-level version key in the response JSON), would Content-Type: application/json be possible?

I think the PEP could use a How to Teach This section, even if it’s just to say that the schema and semantics will be documented in the packaging guide.

Even now, there’s nothing inherently stopping us from adding application/json; we already have an unversioned content type, application/vnd.pypi.simple+json. The main argument against it is just that we don’t have a wide body of historical use already relying on it like we have for text/html, and the custom content types are more explicit.

In other words, it’s possible to accidentally point something like --index-url at something that doesn’t implement the simple api, and get random errors as the code fails somewhere because it’s getting content it doesn’t expect. With the explicit content-types, if you do that and get back a generic content-type, you’ll typically fail up front (or at least, you have the option to) with an unexpected content-type kind of error.
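
For example, here’s a minimal client-side sketch of that up-front check (the URL and the error handling are illustrative, not prescribed by the PEP):

import requests

ACCEPTABLE = (
    "application/vnd.pypi.simple.v1+json",
    "application/vnd.pypi.simple.v1+html",
    "text/html",
)

resp = requests.get("https://example.com/simple/", headers={"Accept": ", ".join(ACCEPTABLE)})
media_type = resp.headers.get("Content-Type", "").partition(";")[0].strip()
if media_type not in ACCEPTABLE:
    # We were pointed at something that isn't a simple repository; fail
    # immediately instead of blowing up somewhere deep inside a parser.
    raise RuntimeError(f"unexpected content type: {media_type!r}")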

We already pay that price for text/html, so it’s not the worst thing to add application/json if we think there is value in adding it. If there’s no benefit from it, then it’s better not to add it, to reduce the edge cases. As I pointed out upthread, there is possibly a benefit in that it might allow GitHub Pages to still work as a simple API, since GitHub Pages doesn’t allow arbitrary custom content types to be used.

2 Likes

Another detail with application/json is that we aren’t locked out from adding it at some point in the future if we want to – it doesn’t need to happen as part of this PEP.

Adding things is much easier than removing them. :slight_smile:

Thanks for putting this together!

A few points:

  • The PEP should clarify whether requesting v1 of the API means you are getting v1.x or v1.0, i.e. whether you’ll have to bump the version number to receive backwards-compatible updates. This isn’t very important on its own, but:

  • The PEP should specify what constitutes a backwards-compatible change in the context of PEP 629. I expect this to consist only of adding properties to existing JSON objects. Therefore, if a client would like to support all minor versions of the API, it should not error when encountering an unknown property (see the sketch after this list).

  • I suggest being explicit when requesting the latest version of the API, i.e. application/vnd.pypi.simple.latest+json instead of application/vnd.pypi.simple+json, as it is rare that consumers would want to do this, and being explicit would reduce the likelihood of it being done by accident.

  • Considering that the HTML API will never graduate from v1 and the PEP refers to it as being “legacy”, I don’t think that we should be adding content-negotiated versioning or a new mime type for it. Consumers who’d like to continue to use the HTML API can ask for text/html and read the version number from HTML metadata as they have been doing. It’s probably also worth noting that the +html suffix is non-standard, but I don’t know what the practical implications of that are.
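
To illustrate the second point, a client that wants to tolerate future minor versions could extract only the keys it knows about and ignore everything else. A rough sketch (the field names follow the current draft and are only illustrative):

import json

def parse_project_page(body: bytes) -> list[dict]:
    page = json.loads(body)
    files = []
    for file_info in page.get("files", []):
        # Pull out only the keys this client understands, silently ignoring
        # any properties added by later backwards-compatible API versions.
        files.append({
            "filename": file_info["filename"],
            "url": file_info["url"],
            "hashes": file_info.get("hashes", {}),
        })
    return files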

I am firmly against having application/json as an alias if the version will be relayed in the subtype for the same reason as:


Well, there isn’t much of a difference if supporting application/json would require updating the schema to include the version number in the response body (unless we’re gonna use the initial underscore “trick”).

I’m one of the authors of this PEP and I’m the default PEP-Delegate for PyPI PEPs, and while I’d like to think I’m perfectly capable of being impartial, I think that it would be best for someone else to take over the PEP-Delegate for PEP 691.

I’ve gone ahead and reached out to @brettcannon, and asked if he would be willing to take on this role again as he did for PEP 629. Brett has graciously agreed, so unless someone has an objection, I’ll update the PEP to switch from myself to Brett.

4 Likes

I’ve gone ahead and updated the PEP based on the feedback so far in this thread. You can see the entire diff on GitHub or read the rendered PEP, but a high-level summary of the changes:

  • Reorganize the content so we introduce things in a more gradual approach, first pulling in versioning, then the JSON serialization, then our content types, then finally tying it all together with content negotiation.
  • Expand a number of the sections with more detail.
  • Add ?format=<content-type> as an optional, alternative mechanism to using HTTP’s content negotiation (see the example after this list).
  • Explicitly mention that the decisions made here enable repositories to do an endpoint per content-type if they desire.
  • Add a recommendations section, to guide implementors on the best “default” choices.
  • Add FAQ entries for a number of questions:
    • Implications for static file servers
    • Does this mean that PyPI is dropping support for PEP 503?
    • Why TUF doesn’t force us to have a single URL per TUF target.
    • Why we haven’t added application/json in addition to text/html.
  • Support the PEP 629 metadata inside of the JSON responses and provide an area for any future data of that nature.
    • This required moving the /simple/ response into a sub key rather than a top level dictionary.
  • Rename the dist-info-metadata-available field to dist-info-metadata.
    • Recommend that hashes for dist-info-metadata are provided if possible.
  • Change the semantics of dist-info-metadata and yanked slightly to make them match gpg-sig better and to make it easier to use them (remove the presence of the key being significant, make the value itself communicate all of the information).
  • Update the PEP-Delegate to be @brettcannon.
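
As a concrete illustration of the ?format= mechanism and the new sub-key layout (key names follow the current draft and may still change):

import requests

# Request the JSON serialization via the query string rather than the Accept
# header, for clients that can't control their headers; requests URL-encodes
# the value for us.
resp = requests.get(
    "https://pypi.org/simple/",
    params={"format": "application/vnd.pypi.simple.v1+json"},
)
page = resp.json()
print(page["meta"]["api-version"])  # PEP 629 metadata now lives in the body
print(len(page["projects"]))        # the project list moved under a sub key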
3 Likes

I do want to call out one recommendation that I’m not sure of:

When a repository encounters an Accept header that does not contain any content types that it knows how to work with, it should not ever return a 300 Multiple Choices response; returning a 406 Not Acceptable response is preferred.

Specifically, the part about returning a 406 Not Acceptable, particularly in the case where an Accept header was not sent at all. In the case where someone sends an Accept header but it doesn’t contain anything you can serve, I think returning a 406 Not Acceptable is the best option, so that you get an obvious up-front error.

However, the case where a request doesn’t contain an Accept header at all is where I’m a little unsure. The benefit of sending a 406 Not Acceptable in that case is that it will guide clients to be explicit about the content types they can work with, which makes everything better in the future when new content types may be added, as nobody will be relying on the implicit default of that particular repository. It also just makes the client more reliable when moving between different servers that support different things.

The downside is that I don’t know how many tools or scripts out there today are not sending an Accept header and would be broken by that recommendation. I know that pip does send an Accept header and would not be broken. I’m unsure about anything else.

This is basically a tradeoff between being maximally compatible with existing clients and guiding clients towards choices that will make themselves and the ecosystem more reliable in general. Currently the PEP’s recommendation on this specific point leans towards long-term reliability, at the expense of possibly breaking some clients and forcing them to add an Accept header.

Ultimately, these are all just recommendations, so repositories would be free to do what they feel is best anyway; it’s not the end of the world either way. It would be nice, though, if our recommendations were the best they could be, and I’m on the fence about this particular one.
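
For concreteness, here’s a sketch of the two behaviors being weighed, using Flask purely as an example (this is not the PEP’s reference code):

from flask import Flask, abort, request

app = Flask(__name__)

SUPPORTED = [
    "application/vnd.pypi.simple.v1+json",
    "application/vnd.pypi.simple.v1+html",
    "text/html",
]

@app.route("/simple/")
def simple_index():
    if not request.accept_mimetypes:
        # No Accept header at all: this is the open question. Either pick a
        # default serialization here, or abort(406) to push clients towards
        # being explicit about what they support.
        chosen = SUPPORTED[0]
    else:
        chosen = request.accept_mimetypes.best_match(SUPPORTED)
        if chosen is None:
            abort(406)  # Accept was sent, but matched nothing we can serve.
    return f"would serve {chosen}", 200, {"Content-Type": chosen}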

NOTE: This does NOT affect browsers, since they always send an Accept header (one that includes text/html).

The example in the PEP for determining the content type uses the cgi module, which has been deprecated by PEP 594 [1]. I’d change the example to use email.message.Message instead as recommended in that PEP.

[1] PEP 594 – Removing dead batteries from the standard library | peps.python.org


Could Appendix 1 be expanded to cover other providers (and consumers?) of the current APIs? The ones that spring to mind are:

I’d be interested to know what they think about this PEP and whether they would implement it.

2 Likes

I don’t have time right this moment to look at the other comment; I just wanted to say that I knew the cgi module was being deprecated, but I used this method anyway because the way of doing it with the email module is somewhat noisier, and I felt it distracted from the actual meat and potatoes of the example code that showed the overall flow.

If folks really think it’s super useful to have

import email.message

def parse_header(header):
    # Lean on the email module to parse the header, since Content-Type uses
    # the same RFC 2045-style syntax in email as in HTTP.
    m = email.message.Message()
    m["content-type"] = header
    # get_params() returns the content type first, then any (key, value) parameters.
    ct, *params_raw = m.get_params()
    return ct[0], dict(params_raw)

at the top of the code, I can add it. It just felt like noise to me.

FYI, I’m going to wait until I’m told the PEP is done and ready for my feedback before I dive back into it.

2 Likes

Does anyone have any other thoughts? Concerns? Anything :slight_smile: I think we’ve covered most of the concerns people have had, but if we haven’t, I’d love to figure them out and get them handled.

3 Likes

I don’t want to sound too pessimistic (the work on this is very much appreciated), but some push back can be healthy. Although I agree with taking an incremental approach, I’m not sure this is really a step forward. The complexity introduced by this PEP has a cost, and if the reward is not significant, I don’t think it is worth it.

My main questions would be: for whom is this PEP, who will benefit from it, and when? (now, the near future, or the far future)

Although JSON in general might be nicer to parse, in its current state the HTML Simple API is much easier to implement than the JSON API proposed in this PEP will be, sitting behind content negotiation with all the possible alternative responses/errors. And the complex logic of HTML as stated under “Abstract” is not really present in the Simple API (html.parser works fine).

I also get that this is maybe not about an improvement for today, but instead an intermediate step for future improvements. But in that case, I don’t think this PEP is laying out a solid foundation for that.

In my opinion, the real value of this PEP will manifest only after tools (client and server) drop HTML support. And before that eventually happens, a better solution should already be around.

This is a lot of complaining from my side without providing any solutions. After thinking about this for a while, I can’t think of great alternatives, at least not without (partly) dropping the zero-configuration requirement.

Which leads me to think: maybe we shouldn’t continue with this PEP at all…

3 Likes

It’s fine-ish. Speaking as someone who has implemented code to handle the HTML-based Simple API: html.parser is not exactly a robust parser. It’s fine for simple things, but there’s no guarantee it will succeed on valid HTML.

Plus it’s way easier to find libraries to consume JSON than HTML in other languages these days (and that is important for tooling purposes).

That’s typically not how we evolve standards because it makes switching harder. By making only the parsing step different but the overall data model the same, it makes this more of a change at the edges of your code rather than at the logic level (e.g. it’s more like encoding/decoding strings with this PEP than switching to integers for everything).

3 Likes

Of course! I welcome people to pick apart these proposals :slight_smile:

In the very short term, I suspect nobody will benefit since the very short term will be all cost (the cost of having to implement this thing) and no benefit (it’s expected that everyone will continue to maintain their existing HTML parsing solution, and the data will be largely 1:1).

In the longer term, we have a couple of benefits:

  • People implementing repositories and clients that implement this API can, on their own time schedule, start dropping support for HTML responses. The expectation is that PyPI and pip will likely maintain theirs for quite some time just due to their positions within the ecosystem, but projects without those constraints are enabled to be much more aggressive in dropping support.
    • This includes brand new projects, who may decide not to ever implement the HTML content type at all.
  • It unlocks the ability to start adding new features that are no longer constrained by the limitations of HTML.

There are a couple of things here that I don’t agree with.

The first is that I don’t think content negotiation is actually harder than the current situation. Content negotiation is a foundational part of how HTTP works, and every client has to be prepared to cope with it in every request.

To expand on that, there is not actually a way to make HTTP requests that don’t, at their core, boil down to content negotiation. So currently, when you make an HTTP request to a simple API, you can either include an Accept: text/html header or not.

If you do not, then the server is, by nature of HTTP, welcome to choose any content type it wants, or return an error. If you do send an Accept header, again the server is free to use that information in guiding what it will return, or it can ignore it and return whatever it wants if it doesn’t support that.

The important bit here is that this is fundamentally just content negotiation, whether you’re not including the Accept header (which tells the server that you’re happy with whatever representation it gives you) or whether you are (which tells the server you prefer text/html).

In both cases, you may not get the content type that you expect; there is no way in HTTP to mandate that you only get the correct content type, and you have to be ready to cope, in some fashion, with the fact that you may not. Now granted, in practice most servers will return the content type that you expect, and in the cases they don’t, you can just assume they did, and at some point you will hit a point where the assumptions you made about the response content don’t hold and you’ll get some random error.

But that’s all mostly true with this PEP too: you can just assume that the server sent you the content type you expected.

You can also just not send an Accept header at all and assume the server will send you something that you expect, which matches the simplest possible client implementation today. The only difference is that there is now a greater chance the server won’t send you what you expect (since previously it should have only returned text/html, but now it could return other content types as well), so it’s recommended that you at least include an Accept header.

I will go back to my example code. Here’s the absolute simplest code that will more or less reliably do what you want in most cases with the existing API:

import requests

resp = requests.get("https://pypi.org/simple/")
resp.raise_for_status()

data = parse_html(resp)

Here’s the same absolute simplest code with the changes in the PEP, assuming that you’re handling the most complex case possible, of supporting both HTML and JSON:

import requests

resp = requests.get(
    "https://pypi.org/simple/",
    headers={"Accept": "application/vnd.pypi.simple.v1+json, application/vnd.pypi.simple.v1+html, text/html"},
)
resp.raise_for_status()

if "application/vnd.pypi.simple.v1+json" in resp.headers.get("content-type", ""):
    data = parse_json(resp)
else:
    data = parse_html(resp)

This isn’t as robust as the example code in the PEP, but it’s as robust as the existing code was (it’s actually technically slightly more robust!). It makes an HTTP request, then assumes that the content type is something it understands, and if not it will error out at some point.

But if you look at these two things, the additional complexity caused by content negotiation is… an extra dictionary being passed to requests.get(), and an extra conditional on the response. That’s hardly what I’d call a lot of extra complexity, and in fact it matches what pip itself does today (other than the addition of the application/vnd.pypi.simple.* types; pip’s conditional just raises an error if the response isn’t text/html).

On the server side there is some additional complexity in parsing the Accept header and selecting the content type to respond with, but all of the major web frameworks that I could find support it, and some of the static file servers support it (some don’t).

The other statement here is that the complex logic of HTML isn’t present in the simple API, but that’s not actually true IMO, because of these two lines from PEP 503:

URL must respond with a valid HTML5 page
There may be any other HTML elements on the API pages as long as the required anchor elements exist.

That means that a fully PEP 503 conformant client MUST be prepared to accept a response body that contains literally any valid HTML5 content, regardless of what that content is. Now in practice it’s highly unusual to put something in your simple response that html.parser can’t parse, so you can most likely get away with ignoring that requirement of the PEP without any ill effect, but doing so means that you’re deviating from the PEP.

Here again, I don’t agree with this conclusion.

I think this does represent an intermediate step for future improvements, because a major blocker to improvements right now is trying to fit things into the capabilities of HTML. For example, something we would like to do is add all of the dependencies for a project in the response, but there isn’t really a good way to serialize a list of data into an HTML attribute besides something like embedding JSON inside of an HTML attribute.
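
To make the serialization problem concrete, about the best HTML can do for a list-valued field is a hypothetical attribute like this (not something any current PEP defines):

import html
import json

requires_dist = ["requests>=2.20", "click"]
# The only real option in HTML is to stuff a JSON document into an attribute:
attr = html.escape(json.dumps(requires_dist))
anchor = f'<a href="https://example.com/example-1.0.tar.gz" data-requires-dist="{attr}">example-1.0.tar.gz</a>'
print(anchor)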

An important aspect of this PEP is in this line:

Future versions of the API may add things that can only be represented in a subset of the available serializations of that version.

This gives us full permission to effectively freeze the HTML API in place, never adding another feature to it, while we start adding new features to the JSON API, freeing us from having to worry about how we can encode something that we want to add into HTML.

Certainly, some of the value in this PEP will not manifest itself until after clients or repositories start dropping support for HTML, though even in the interim, it makes things like “just” using html.parser a little more palatable. Though as mentioned above this PEP does allow us to start improving the API with new features right away.

I do want to challenge the idea of “a better solution should already be around”. I don’t think that the data model of the simple API is actually a problem for its intended use case, and I think it serves it well. There are things that we would like to add that are tough to express in HTML, but I think the fundamental shape of the data is… fine?

I don’t really see us needing to replace this API in the future unless the state of the art drastically changes in some way that I don’t think it’s possible for us to see right now.

Certainly, this API isn’t well structured for a general-purpose API to interact with PyPI, but that’s not its goal and never should be. The amount of traffic we get for this API is massive, and it deserves an API that is specialized for its use cases; a general-purpose API will never be that.

6 Likes

Just for kicks, here’s an implementation of this for Warehouse that should be fully featured (not able to be landed since it needs tests and such, but manual testing has it working fine): Implement a PoC for PEP 691 by dstufft · Pull Request #11485 · pypa/warehouse · GitHub.

Might try to throw something together for pip as well here in a bit.

3 Likes

Here’s the same thing for pip.

2 Likes

Ok, and I tested both of these locally, both with Warehouse serving both content types, and with Warehouse’s HTML support commented out altogether. My Warehouse isn’t set up to serve files, so fetching files 404’d, but it got to that point just fine.

Most of the changed lines in the pip PR are just removing the word “html”; maybe I should have left them in to make it more obvious what the actual required changes are.

2 Likes

I’ve also got my proxy index working with manual tests: Comparing master...json-api · EpicWink/proxpi · GitHub


Any users who use curl without explicitly setting Accept will likely start getting JSON responses and breaking their scripts, since curl sets Accept: */* by default. The solution to this would be to require that JSON be chosen only if its quality is strictly greater than HTML’s, but then Accept: ...+json, ...+html (i.e. without setting quality) would always return HTML.
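
A tiny sketch of that tie-breaking rule (purely hypothetical; the PEP doesn’t specify this):

def pick_serialization(q_json: float, q_html: float) -> str:
    # Serve JSON only when the client ranked it *strictly* higher than HTML.
    # With curl's default Accept: */* both qualities are 1.0, so HTML wins.
    # But "Accept: ...+json, ...+html" with no explicit q values also ties,
    # which is exactly the problem noted above.
    if q_json > q_html:
        return "application/vnd.pypi.simple.v1+json"
    return "application/vnd.pypi.simple.v1+html"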


I did a benchmark of the response body size difference between the HTML and JSON APIs. On average, the JSON response was 1.91x as large (i.e. 91% bigger).

Individual packages:
Project HTML size (kB) JSON size (kB) JSON size ratio
babel 8.5 18.4 2.16
cython 498.7 883.0 1.77
flask 11.7 25.0 2.14
gitpython 25.3 50.7 2.0
jinja2 14.2 30.7 2.16
keras-preprocessing 4.3 8.7 2.02
mako 10.1 22.8 2.26
markdown 14.1 31.0 2.2
markupsafe 85.7 153.5 1.79
pillow 478.8 924.8 1.93
pyjwt 16.7 38.3 2.29
pyopengl 9.7 22.5 2.32
pyopengl-accelerate 26.0 52.9 2.03
pyqt5 27.7 51.9 1.87
pyqt5-qt5 0.8 1.5 1.88
pyqt5-sip 51.1 98.4 1.93
pywavelets 68.4 129.6 1.89
pyyaml 69.0 133.0 1.93
pygments 24.4 53.9 2.21
qtpy 10.2 22.1 2.17
sqlalchemy 504.1 842.8 1.67
send2trash 4.0 8.7 2.17
shapely 132.2 259.8 1.97
sphinx 62.9 138.2 2.2
werkzeug 22.1 46.8 2.12
absl-py 6.3 14.5 2.3
alabaster 4.9 11.1 2.27
alembic 22.0 45.1 2.05
argon2-cffi 39.5 79.5 2.01
astunparse 3.1 6.8 2.19
attrs 8.6 17.5 2.03
azure-common 10.2 22.2 2.18
azure-core 13.7 29.8 2.18
azure-cosmos 8.4 17.9 2.13
azure-identity 13.6 28.7 2.11
azure-keyvault-secrets 5.2 10.4 2.0
azure-storage-blob 15.2 31.2 2.05
backcall 0.7 1.4 2.0
bleach 14.5 30.1 2.08
boto3 346.4 745.5 2.15
botocore 460.8 986.6 2.14
build 7.2 13.8 1.92
cachetools 12.0 25.8 2.15
certifi 13.3 29.0 2.18
cffi 231.6 483.1 2.09
charset-normalizer 12.1 23.2 1.92
click 14.9 32.8 2.2
cloudpickle 11.4 24.4 2.14
colorama 12.1 27.9 2.31
coverage 533.9 984.8 1.84
cryptography 353.3 672.5 1.9
cycler 1.1 2.1 1.91
databricks-cli 13.0 28.2 2.17
debugpy 246.0 421.3 1.71
decorator 10.5 22.2 2.11
defusedxml 4.2 8.2 1.95
deprecation 4.1 8.9 2.17
docker 18.2 37.5 2.06
docutils 10.0 20.0 2.0
entrypoints 2.0 3.8 1.9
flaky 8.3 18.2 2.19
flatbuffers 1.7 3.5 2.06
floto 0.1 0.1 1.0
gast 4.6 9.4 2.04
gitdb 4.4 9.3 2.11
glfw 40.5 75.6 1.87
google-auth 45.5 85.9 1.89
google-auth-oauthlib 5.4 10.6 1.96
google-pasta 4.8 10.4 2.17
greenlet 159.6 301.4 1.89
grpcio 852.9 1705.8 2.0
gunicorn 16.0 35.6 2.23
h5py 80.5 155.1 1.93
idna 6.2 13.8 2.23
imageio 18.8 40.1 2.13
imagesize 3.0 5.9 1.97
imgaug 2.4 5.3 2.21
imgviz 10.9 25.0 2.29
importlib-metadata 38.0 70.3 1.85
importlib-resources 20.6 37.9 1.84
iniconfig 1.3 2.7 2.08
ipykernel 32.2 66.4 2.06
ipyparallel 15.0 30.5 2.03
ipython 54.8 119.1 2.17
ipython-genutils 0.9 1.8 2.0
ipywidgets 36.5 79.5 2.18
isodate 2.6 5.9 2.27
itsdangerous 6.0 12.4 2.07
jedi 10.0 19.9 1.99
jmespath 5.4 11.9 2.2
joblib 28.0 63.9 2.28
jsonschema 18.1 39.3 2.17
jupyter-client 22.3 44.0 1.97
jupyter-core 13.0 26.4 2.03
jupyterlab-pygments 2.4 4.5 1.88
jupyterlab-widgets 16.1 32.0 1.99
keras 12.9 29.6 2.29
kiwisolver 75.8 133.9 1.77
labelme 24.1 57.4 2.38
libclang 7.1 13.8 1.94
majora 1.2 2.2 1.83
marshmallow 54.5 116.0 2.13
marshmallow-dataclass 18.3 36.4 1.99
marshmallow-oneofschema 5.0 9.6 1.92
marshmallow-union 1.9 3.7 1.95
matplotlib 241.3 445.6 1.85
matplotlib-inline 1.7 3.1 1.82
mistune 14.8 29.3 1.98
mlflow 18.7 40.3 2.16
msal 10.8 24.8 2.3
msal-extensions 3.4 7.1 2.09
msrest 22.3 50.7 2.27
mypy-extensions 1.7 3.5 2.06
nbclient 10.6 20.7 1.95
nbconvert 19.3 38.9 2.02
nbformat 8.1 16.5 2.04
nest-asyncio 11.6 22.9 1.97
networkx 30.0 67.8 2.26
nose 2.8 6.5 2.32
notebook 29.5 63.6 2.16
numpy 504.2 907.6 1.8
oauthlib 8.6 19.1 2.22
opencv-python 285.8 518.4 1.81
opencv-python-headless 242.7 428.2 1.76
opt-einsum 3.0 6.3 2.1
packaging 14.3 28.7 2.01
pandas 272.8 516.2 1.89
pandocfilters 2.6 5.6 2.15
parso 8.5 18.2 2.14
pep517 4.2 9.4 2.24
pexpect 4.0 9.0 2.25
pickleshare 3.3 7.2 2.18
pip 36.6 76.4 2.09
pluggy 7.0 13.6 1.94
portalocker 8.6 18.9 2.2
prometheus-client 9.4 19.2 2.04
prometheus-flask-exporter 12.1 25.0 2.07
prompt-toolkit 42.6 90.3 2.12
protobuf 315.6 614.4 1.95
psutil 198.9 408.2 2.05
ptyprocess 2.2 4.8 2.18
py 12.9 28.7 2.22
pyasn1 46.8 109.6 2.34
pyasn1-modules 40.0 88.4 2.21
pycocotools 1.0 2.1 2.1
pycparser 3.4 7.7 2.26
pymap3d 11.9 25.6 2.15
pyparsing 43.3 93.8 2.17
pyproj 123.2 230.2 1.87
pyrsistent 24.3 48.6 2.0
pytest 49.1 100.2 2.04
pytest-cov 11.9 24.1 2.03
python-dateutil 9.2 18.4 2.0
python-editor 2.4 5.1 2.12
python-json-logger 4.2 8.6 2.05
pytz 95.0 228.2 2.4
pyzmq 291.6 556.2 1.91
qtconsole 12.8 27.7 2.16
querystring-parser 1.4 2.8 2.0
requests 32.6 71.5 2.19
requests-oauthlib 6.4 13.1 2.05
rsa 11.2 25.5 2.28
s3transfer 10.2 22.2 2.18
scikit-image 105.4 185.9 1.76
scikit-learn 243.3 448.9 1.85
scipy 286.8 516.0 1.8
sentry-sdk 46.1 101.6 2.2
setuptools 208.1 441.1 2.12
six 7.0 15.6 2.23
sklearn 0.3 0.4 1.33
smmap 3.8 7.7 2.03
snowballstemmer 2.3 4.9 2.13
sphinxcontrib-applehelp 1.3 2.3 1.77
sphinxcontrib-devhelp 1.3 2.3 1.77
sphinxcontrib-htmlhelp 2.0 3.8 1.9
sphinxcontrib-jsmath 1.0 1.6 1.6
sphinxcontrib-qthelp 1.6 3.0 1.88
sphinxcontrib-serializinghtml 2.6 4.7 1.81
sqlparse 6.3 13.7 2.17
tabulate 4.5 10.2 2.27
tensorboard 21.2 40.0 1.89
tensorboard-data-server 5.1 8.9 1.75
tensorboard-plugin-wit 1.5 2.7 1.8
tensorflow 171.6 338.0 1.97
tensorflow-estimator 6.9 13.4 1.94
tensorflow-io-gcs-filesystem 40.3 65.5 1.63
termcolor 1.1 2.3 2.09
terminado 9.3 19.0 2.04
testpath 3.1 6.6 2.13
threadpoolctl 2.8 5.2 1.86
tifffile 35.4 72.8 2.06
toml 3.1 7.0 2.26
tomli 8.0 16.4 2.05
tornado 49.6 97.0 1.96
tqdm 49.4 104.1 2.11
traitlets 11.4 23.9 2.1
typeguard 15.1 31.3 2.07
typing-extensions 8.2 16.8 2.05
typing-inspect 4.0 8.4 2.1
urllib3 21.7 43.4 2.0
wcwidth 4.4 9.8 2.23
webencodings 1.1 2.4 2.18
websocket-client 17.5 36.5 2.09
wheel 19.4 40.6 2.09
widgetsnbextension 38.5 79.9 2.08
wrapt 182.5 308.7 1.69
xmltodict 4.6 10.3 2.24
zipp 12.1 24.9 2.06

Edit: bad benchmark; see the comments below for the correct run.