PEP 691: JSON-based Simple API for Python Package Indexes

The way I’ve implemented this in Warehouse: it essentially starts with a priority list that is hard-coded in Warehouse, then takes the list from the client and effectively sorts that list using the priority values. Then it takes the first item.

This works because, with a stable sort, items with equal preference retain their ordering. Clients can express their priority, but within the same priority level the server’s initial priority controls the outcome.
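A minimal sketch of the stable-sort idea described above (this is not Warehouse’s actual code; the function name and hard-coded list are illustrative):

```python
# Hard-coded server priority, most preferred first.
SERVER_PREFERENCE = [
    "text/html",
    "application/vnd.pypi.simple.v1+html",
    "application/vnd.pypi.simple.v1+json",
]

def negotiate(client_q):
    """Pick a content type. client_q maps content type -> q value
    parsed from the client's Accept header."""
    def quality(ct):
        return client_q.get(ct, client_q.get("*/*", 0.0))
    # sorted() is stable: types the client rates equally keep the
    # server's initial ordering, so server preference breaks ties.
    ordered = sorted(SERVER_PREFERENCE, key=lambda ct: -quality(ct))
    return ordered[0] if quality(ordered[0]) > 0 else None
```

With equal q-values the server’s order wins (`text/html` here); a client that rates only the JSON type gets JSON.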

For compatibility reasons, Warehouse prefers text/html over +html over +json, absent any signal from the client that it prefers JSON over HTML. I would like to have the server itself prefer JSON over HTML, but I believe the chances of breakage are much higher in that situation. It might be a good intermediate step at some point in the future if we ever decide we want to more directly push people towards JSON.

Does this take into account any compression? Or is it the decompressed size?

Oh I see:

This is what your HTML output looks like per file:

    <a href="nose-1.3.7.tar.gz#sha256=f1bffef9cbc82628f6e7d7b40d7e255aefaa1adb6a1b1d26c69a8b79e6208a98">nose-1.3.7.tar.gz</a><br />

(plus a newline)

This is what it looks like for JSON:

{"filename":"nose-1.3.7.tar.gz","hashes":{"sha256":"f1bffef9cbc82628f6e7d7b40d7e255aefaa1adb6a1b1d26c69a8b79e6208a98"},"url":"https://files.pythonhosted.org/packages/58/a5/0dc93c3ec33f4e281849523a5a913fa1eea9a3068acfa754d44d88107a44/nose-1.3.7.tar.gz#sha256=f1bffef9cbc82628f6e7d7b40d7e255aefaa1adb6a1b1d26c69a8b79e6208a98"}

That’s 132 bytes per file for HTML vs 324 bytes per file for JSON (for nose-1.3.7.tar.gz), or a 2.45 ratio.

You should be able to drop the #sha256=... from the url; that’s not required in JSON, since that’s what the hashes key is for. That should save 72 bytes per file.

That takes us to 132:252, or a 1.9 ratio.

The URLs are another big difference: nose-1.3.7.tar.gz vs https://files.pythonhosted.org/packages/58/a5/0dc93c3ec33f4e281849523a5a913fa1eea9a3068acfa754d44d88107a44/nose-1.3.7.tar.gz is an extra 107 bytes per file (and I think it may be a bug in the PR; I assume you want to serve the file locally for caching?).

The PEP does specify that URLs are to be interpreted as they would be for HTML, which allows relative URLs to work, so the same URL should work for both.
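That resolution rule is exactly what `urllib.parse.urljoin` implements; a quick illustration (the index URL here is hypothetical):

```python
from urllib.parse import urljoin

# Hypothetical project detail page; relative file URLs resolve against it,
# exactly as an <a href="..."> would in the HTML response.
page = "https://index.example/simple/nose/"
print(urljoin(page, "nose-1.3.7.tar.gz"))
# Absolute URLs pass through untouched.
print(urljoin(page, "https://files.example/packages/nose-1.3.7.tar.gz"))
```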

If we remove the 107 bytes, that brings us to 132:145, or a 1.1 ratio; the remaining 13 bytes per file of difference is largely noise from having to specify “filename” and “hashes” as keys versus not having spaces and newlines. Compression should erase most of that.
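The arithmetic above can be checked mechanically. A sketch using the file entry from this thread (the compact `json.dumps` separators are an assumption about how a server would serialize):

```python
import json

sha = "f1bffef9cbc82628f6e7d7b40d7e255aefaa1adb6a1b1d26c69a8b79e6208a98"
host = ("https://files.pythonhosted.org/packages/58/a5/"
        "0dc93c3ec33f4e281849523a5a913fa1eea9a3068acfa754d44d88107a44/")

def entry(url):
    # Compact encoding, roughly what a server would emit per file.
    return json.dumps(
        {"filename": "nose-1.3.7.tar.gz", "hashes": {"sha256": sha}, "url": url},
        separators=(",", ":"),
    )

full = entry(host + "nose-1.3.7.tar.gz#sha256=" + sha)
no_fragment = entry(host + "nose-1.3.7.tar.gz")  # drop redundant fragment: -72 bytes
relative = entry("nose-1.3.7.tar.gz")            # relative URL: -107 more bytes
print(len(full), len(no_fragment), len(relative))
```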

Yup, thanks for pointing that out. I’ll fix the response and re-run the benchmark. This is why you don’t code at 3am.


Warehouse’s preference for text/html on ties is what I’m talking about: if a client requests Accept: text/html, ...+json, the server will always respond with HTML. With Warehouse’s ordering, clients must either specify quality values, or omit text/html entirely while listing ...+json ahead of ...+html. Not to mention that content negotiation doesn’t seem to care about order.

I would prefer if the PEP said to default to assuming text/html when qualities are equal (and nonzero), and that clients should always set quality.
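For instance, a client that wants JSON but degrades gracefully could always send explicit qualities, so the server’s tie-breaking preference never applies (a hypothetical sketch; the content types are those defined by the PEP):

```python
# Explicit q-values: JSON is unambiguously preferred, HTML is a fallback,
# so the server never has to break a tie itself.
ACCEPT = ", ".join([
    "application/vnd.pypi.simple.v1+json",        # implicit q=1.0
    "application/vnd.pypi.simple.v1+html;q=0.2",
    "text/html;q=0.1",
])
print(ACCEPT)
```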


By latest version, I’m assuming that means the latest version the server knows about?

We currently leave it up to each server to decide what to do. This was to give each implementor the most flexibility to decide what makes the most sense for them.

We pick the most compatible possible option in Warehouse because there is only one version of Warehouse, so people can’t select different behaviors by different versions. I think it’s fine for other implementations to do something different.

Yes.


Fixed benchmark result: the JSON response is 1.05x (± 0.04) as large (i.e. 5% bigger)

Individual packages:
Project HTML size (kB) JSON size (kB) JSON size ratio
babel 8.5 9.1 1.07
cython 498.7 518.7 1.04
flask 11.7 12.4 1.06
gitpython 25.3 26.6 1.05
jinja2 14.2 15.2 1.07
keras-preprocessing 4.3 4.5 1.05
mako 10.1 10.9 1.08
markdown 14.1 15.1 1.07
markupsafe 85.7 88.6 1.03
pillow 478.8 505.2 1.06
pyjwt 16.7 18.2 1.09
pyopengl 9.7 10.6 1.09
pyopengl-accelerate 26.0 27.9 1.07
pyqt5 27.7 29.2 1.05
pyqt5-qt5 0.8 0.8 1.0
pyqt5-sip 51.1 53.6 1.05
pywavelets 68.4 72.3 1.06
pyyaml 69.0 72.7 1.05
pygments 24.4 26.3 1.08
qtpy 10.2 10.9 1.07
sqlalchemy 504.1 519.4 1.03
send2trash 4.0 4.3 1.07
shapely 132.2 140.4 1.06
sphinx 62.9 67.5 1.07
werkzeug 22.1 23.6 1.07
absl-py 6.7 7.2 1.07
alabaster 4.9 5.3 1.08
alembic 22.0 23.3 1.06
argon2-cffi 39.5 42.2 1.07
astunparse 3.1 3.3 1.06
attrs 8.6 9.1 1.06
azure-common 10.2 11.0 1.08
azure-core 14.1 15.1 1.07
azure-cosmos 8.4 9.0 1.07
azure-identity 13.6 14.6 1.07
azure-keyvault-secrets 5.2 5.5 1.06
azure-storage-blob 15.2 16.2 1.07
backcall 0.7 0.7 1.0
bleach 14.5 15.4 1.06
boto3 346.7 372.2 1.07
botocore 461.0 495.6 1.08
build 7.2 7.5 1.04
cachetools 12.0 12.8 1.07
certifi 13.3 14.3 1.08
cffi 231.6 249.8 1.08
charset-normalizer 12.1 12.5 1.03
click 14.9 16.0 1.07
cloudpickle 11.4 12.2 1.07
colorama 12.1 13.2 1.09
coverage 533.9 555.8 1.04
cryptography 353.3 372.7 1.05
cycler 1.1 1.1 1.0
databricks-cli 13.0 14.1 1.08
debugpy 246.0 253.4 1.03
decorator 10.5 11.2 1.07
defusedxml 4.2 4.4 1.05
deprecation 4.1 4.4 1.07
docker 18.2 19.3 1.06
docutils 10.0 10.5 1.05
entrypoints 2.0 2.0 1.0
flaky 8.3 8.9 1.07
flatbuffers 1.7 1.7 1.0
floto 0.1 0.1 1.0
gast 4.6 4.8 1.04
gitdb 4.4 4.6 1.05
glfw 40.5 42.9 1.06
google-auth 45.5 47.5 1.04
google-auth-oauthlib 5.4 5.7 1.06
google-pasta 4.8 5.2 1.08
greenlet 159.6 168.0 1.05
grpcio 852.9 910.8 1.07
gunicorn 16.0 17.2 1.07
h5py 80.5 85.2 1.06
idna 6.2 6.6 1.06
imageio 18.8 20.0 1.06
imagesize 3.0 3.1 1.03
imgaug 2.4 2.5 1.04
imgviz 10.9 11.7 1.07
importlib-metadata 38.0 39.3 1.03
importlib-resources 20.6 21.3 1.03
iniconfig 1.3 1.3 1.0
ipykernel 32.2 33.9 1.05
ipyparallel 15.0 15.8 1.05
ipython 54.8 58.7 1.07
ipython-genutils 0.9 0.9 1.0
ipywidgets 36.5 39.5 1.08
isodate 2.6 2.8 1.08
itsdangerous 6.0 6.3 1.05
jedi 10.0 10.5 1.05
jmespath 5.4 5.7 1.06
joblib 28.0 30.5 1.09
jsonschema 18.4 19.8 1.08
jupyter-client 22.3 23.4 1.05
jupyter-core 13.0 13.7 1.05
jupyterlab-pygments 2.4 2.4 1.0
jupyterlab-widgets 16.1 17.0 1.06
keras 12.9 14.0 1.09
kiwisolver 75.8 78.3 1.03
labelme 24.1 26.5 1.1
libclang 7.1 7.5 1.06
majora 1.2 1.1 0.92
marshmallow 54.5 58.5 1.07
marshmallow-dataclass 18.3 19.3 1.05
marshmallow-oneofschema 5.0 5.2 1.04
marshmallow-union 1.9 2.0 1.05
matplotlib 241.3 252.6 1.05
matplotlib-inline 1.7 1.7 1.0
mistune 14.8 15.8 1.07
mlflow 18.7 19.9 1.06
msal 10.8 11.7 1.08
msal-extensions 3.4 3.6 1.06
msrest 22.3 24.3 1.09
mypy-extensions 1.7 1.8 1.06
nbclient 10.6 10.9 1.03
nbconvert 19.3 20.2 1.05
nbformat 8.1 8.5 1.05
nest-asyncio 11.6 12.1 1.04
networkx 30.0 32.5 1.08
nose 2.8 3.0 1.07
notebook 29.5 31.6 1.07
numpy 504.2 522.9 1.04
oauthlib 8.6 9.3 1.08
opencv-python 285.8 299.2 1.05
opencv-python-headless 242.7 253.2 1.04
opt-einsum 3.0 3.2 1.07
packaging 14.3 15.1 1.06
pandas 272.8 286.3 1.05
pandocfilters 2.6 2.8 1.08
parso 8.5 9.1 1.07
pep517 4.2 4.5 1.07
pexpect 4.0 4.3 1.07
pickleshare 3.3 3.5 1.06
pip 36.6 38.8 1.06
pluggy 7.0 7.3 1.04
portalocker 8.6 9.3 1.08
prometheus-client 9.4 9.9 1.05
prometheus-flask-exporter 12.1 13.0 1.07
prompt-toolkit 42.6 45.7 1.07
protobuf 315.6 335.2 1.06
psutil 198.9 212.5 1.07
ptyprocess 2.2 2.3 1.05
py 12.9 13.9 1.08
pyasn1 46.8 51.3 1.1
pyasn1-modules 40.0 43.5 1.09
pycocotools 1.0 1.0 1.0
pycparser 3.4 3.7 1.09
pymap3d 11.9 12.6 1.06
pyparsing 43.3 46.6 1.08
pyproj 123.2 128.8 1.05
pyrsistent 24.3 25.7 1.06
pytest 49.1 51.8 1.05
pytest-cov 11.9 12.6 1.06
python-dateutil 9.2 9.7 1.05
python-editor 2.4 2.5 1.04
python-json-logger 4.2 4.4 1.05
pytz 95.0 104.6 1.1
pyzmq 291.6 305.7 1.05
qtconsole 12.8 13.7 1.07
querystring-parser 1.4 1.4 1.0
requests 32.6 35.2 1.08
requests-oauthlib 6.4 6.8 1.06
rsa 11.2 12.1 1.08
s3transfer 10.2 11.0 1.08
scikit-image 105.4 109.3 1.04
scikit-learn 243.3 255.2 1.05
scipy 286.8 296.9 1.04
sentry-sdk 46.1 50.0 1.08
setuptools 208.1 222.3 1.07
six 7.0 7.5 1.07
sklearn 0.3 0.2 0.67
smmap 3.8 4.0 1.05
snowballstemmer 2.3 2.5 1.09
sphinxcontrib-applehelp 1.3 1.3 1.0
sphinxcontrib-devhelp 1.3 1.3 1.0
sphinxcontrib-htmlhelp 2.0 2.1 1.05
sphinxcontrib-jsmath 1.0 0.9 0.9
sphinxcontrib-qthelp 1.6 1.6 1.0
sphinxcontrib-serializinghtml 2.6 2.6 1.0
sqlparse 6.3 6.7 1.06
tabulate 4.5 4.8 1.07
tensorboard 21.2 22.2 1.05
tensorboard-data-server 5.1 5.2 1.02
tensorboard-plugin-wit 1.5 1.5 1.0
tensorflow 171.6 183.3 1.07
tensorflow-estimator 6.9 7.3 1.06
tensorflow-io-gcs-filesystem 40.3 40.9 1.01
termcolor 1.1 1.1 1.0
terminado 9.3 9.7 1.04
testpath 3.1 3.3 1.06
threadpoolctl 2.8 2.8 1.0
tifffile 35.4 37.3 1.05
toml 3.1 3.3 1.06
tomli 8.0 8.3 1.04
tornado 49.6 51.9 1.05
tqdm 49.4 52.6 1.06
traitlets 11.4 12.1 1.06
typeguard 15.1 15.9 1.05
typing-extensions 8.2 8.7 1.06
typing-inspect 4.0 4.2 1.05
urllib3 21.7 22.8 1.05
wcwidth 4.4 4.7 1.07
webencodings 1.1 1.2 1.09
websocket-client 17.5 18.7 1.07
wheel 19.4 20.7 1.07
widgetsnbextension 38.5 41.4 1.08
wrapt 182.5 187.8 1.03
xmltodict 4.6 4.9 1.07
zipp 12.1 12.6 1.04

Benchmark with (gzip) compression result: the JSON response is 0.97x (± 0.05) as large (i.e. 3% smaller)

Individual packages:
Project HTML size (kB) JSON size (kB) JSON size ratio
babel 2.7 2.6 0.96
cython 98.1 97.9 1.0
flask 3.5 3.5 1.0
gitpython 6.9 6.9 1.0
jinja2 4.4 4.3 0.98
keras-preprocessing 1.3 1.3 1.0
mako 3.4 3.3 0.97
markdown 4.5 4.4 0.98
markupsafe 17.9 17.8 0.99
pillow 112.6 112.3 1.0
pyjwt 5.5 5.4 0.98
pyopengl 3.3 3.3 1.0
pyopengl-accelerate 6.9 6.8 0.99
pyqt5 6.5 6.5 1.0
pyqt5-qt5 0.4 0.4 1.0
pyqt5-sip 12.2 12.1 0.99
pywavelets 16.0 15.9 0.99
pyyaml 16.6 16.5 0.99
pygments 7.5 7.5 1.0
qtpy 3.2 3.1 0.97
sqlalchemy 86.9 86.7 1.0
send2trash 1.4 1.3 0.93
shapely 32.6 32.5 1.0
sphinx 18.9 18.8 0.99
werkzeug 6.4 6.3 0.98
absl-py 2.3 2.3 1.0
alabaster 1.7 1.6 0.94
alembic 6.1 6.0 0.98
argon2-cffi 10.4 10.3 0.99
astunparse 1.1 1.1 1.0
attrs 2.5 2.4 0.96
azure-common 3.1 3.1 1.0
azure-core 4.3 4.3 1.0
azure-cosmos 2.6 2.5 0.96
azure-identity 4.0 3.9 0.97
azure-keyvault-secrets 1.5 1.5 1.0
azure-storage-blob 4.3 4.2 0.98
backcall 0.4 0.3 0.75
bleach 4.1 4.1 1.0
boto3 97.8 97.8 1.0
botocore 128.2 128.6 1.0
build 2.0 1.9 0.95
cachetools 3.6 3.5 0.97
certifi 4.2 4.1 0.98
cffi 62.2 62.1 1.0
charset-normalizer 3.1 3.0 0.97
click 4.7 4.6 0.98
cloudpickle 3.4 3.4 1.0
colorama 4.1 4.1 1.0
coverage 114.8 114.5 1.0
cryptography 80.2 79.9 1.0
cycler 0.5 0.4 0.8
databricks-cli 3.9 3.9 1.0
debugpy 45.3 45.1 1.0
decorator 3.2 3.2 1.0
defusedxml 1.3 1.2 0.92
deprecation 1.4 1.3 0.93
docker 5.0 5.0 1.0
docutils 2.7 2.7 1.0
entrypoints 0.7 0.6 0.86
flaky 2.7 2.6 0.96
flatbuffers 0.6 0.6 1.0
floto 0.1 0.1 1.0
gast 1.4 1.4 1.0
gitdb 1.5 1.4 0.93
glfw 9.1 9.0 0.99
google-auth 10.6 10.6 1.0
google-auth-oauthlib 1.5 1.5 1.0
google-pasta 1.6 1.5 0.94
greenlet 36.2 36.1 1.0
grpcio 210.6 209.9 1.0
gunicorn 5.0 5.0 1.0
h5py 19.2 19.1 0.99
idna 2.1 2.1 1.0
imageio 5.7 5.7 1.0
imagesize 1.0 0.9 0.9
imgaug 0.9 0.9 1.0
imgviz 3.6 3.6 1.0
importlib-metadata 8.6 8.5 0.99
importlib-resources 4.7 4.6 0.98
iniconfig 0.6 0.5 0.83
ipykernel 8.8 8.8 1.0
ipyparallel 4.2 4.1 0.98
ipython 16.4 16.3 0.99
ipython-genutils 0.4 0.4 1.0
ipywidgets 10.8 10.7 0.99
isodate 1.0 0.9 0.9
itsdangerous 1.9 1.8 0.95
jedi 2.8 2.7 0.96
jmespath 1.8 1.8 1.0
joblib 9.0 9.0 1.0
jsonschema 5.6 5.5 0.98
jupyter-client 5.8 5.7 0.98
jupyter-core 3.6 3.6 1.0
jupyterlab-pygments 0.8 0.7 0.87
jupyterlab-widgets 4.2 4.1 0.98
keras 4.3 4.2 0.98
kiwisolver 15.6 15.5 0.99
labelme 8.2 8.2 1.0
libclang 1.9 1.9 1.0
majora 0.5 0.4 0.8
marshmallow 15.4 15.3 0.99
marshmallow-dataclass 4.8 4.7 0.98
marshmallow-oneofschema 1.4 1.4 1.0
marshmallow-union 0.7 0.6 0.86
matplotlib 52.0 51.8 1.0
matplotlib-inline 0.6 0.5 0.83
mistune 3.8 3.8 1.0
mlflow 5.6 5.5 0.98
msal 3.6 3.5 0.97
msal-extensions 1.1 1.1 1.0
msrest 7.1 7.0 0.99
mypy-extensions 0.7 0.6 0.86
nbclient 2.9 2.8 0.97
nbconvert 5.2 5.1 0.98
nbformat 2.4 2.4 1.0
nest-asyncio 3.0 3.0 1.0
networkx 9.6 9.6 1.0
nose 1.1 1.0 0.91
notebook 8.7 8.6 0.99
numpy 103.1 102.8 1.0
oauthlib 2.8 2.7 0.96
opencv-python 59.8 59.7 1.0
opencv-python-headless 48.0 48.0 1.0
opt-einsum 1.1 1.0 0.91
packaging 3.9 3.8 0.97
pandas 61.7 61.5 1.0
pandocfilters 1.0 0.9 0.9
parso 2.6 2.5 0.96
pep517 1.5 1.4 0.93
pexpect 1.4 1.4 1.0
pickleshare 1.2 1.1 0.92
pip 10.3 10.2 0.99
pluggy 1.9 1.9 1.0
portalocker 2.7 2.7 1.0
prometheus-client 2.7 2.6 0.96
prometheus-flask-exporter 3.4 3.3 0.97
prompt-toolkit 12.0 12.0 1.0
protobuf 75.0 74.8 1.0
psutil 52.1 51.9 1.0
ptyprocess 0.8 0.8 1.0
py 4.2 4.1 0.98
pyasn1 15.4 15.3 0.99
pyasn1-modules 12.0 12.0 1.0
pycocotools 0.5 0.4 0.8
pycparser 1.3 1.2 0.92
pymap3d 3.6 3.5 0.97
pyparsing 12.8 12.7 0.99
pyproj 27.7 27.5 0.99
pyrsistent 6.6 6.6 1.0
pytest 13.1 13.0 0.99
pytest-cov 3.3 3.3 1.0
python-dateutil 2.6 2.5 0.96
python-editor 0.9 0.8 0.89
python-json-logger 1.4 1.3 0.93
pytz 32.6 32.5 1.0
pyzmq 67.5 67.4 1.0
qtconsole 3.9 3.9 1.0
querystring-parser 0.6 0.5 0.83
requests 9.8 9.8 1.0
requests-oauthlib 1.9 1.9 1.0
rsa 3.9 3.8 0.97
s3transfer 3.1 3.1 1.0
scikit-image 21.2 21.1 1.0
scikit-learn 52.4 52.3 1.0
scipy 58.9 58.7 1.0
sentry-sdk 13.8 13.8 1.0
setuptools 58.0 57.8 1.0
six 2.3 2.3 1.0
sklearn 0.2 0.2 1.0
smmap 1.2 1.1 0.92
snowballstemmer 0.8 0.8 1.0
sphinxcontrib-applehelp 0.5 0.5 1.0
sphinxcontrib-devhelp 0.5 0.5 1.0
sphinxcontrib-htmlhelp 0.7 0.6 0.86
sphinxcontrib-jsmath 0.4 0.4 1.0
sphinxcontrib-qthelp 0.6 0.5 0.83
sphinxcontrib-serializinghtml 0.8 0.7 0.87
sqlparse 2.1 2.0 0.95
tabulate 1.6 1.5 0.94
tensorboard 5.0 4.9 0.98
tensorboard-data-server 1.2 1.2 1.0
tensorboard-plugin-wit 0.5 0.5 1.0
tensorflow 41.3 41.1 1.0
tensorflow-estimator 1.8 1.8 1.0
tensorflow-io-gcs-filesystem 6.9 6.8 0.99
termcolor 0.5 0.4 0.8
terminado 2.7 2.7 1.0
testpath 1.1 1.0 0.91
threadpoolctl 0.9 0.8 0.89
tifffile 9.8 9.7 0.99
toml 1.2 1.1 0.92
tomli 2.3 2.3 1.0
tornado 12.4 12.3 0.99
tqdm 14.0 13.9 0.99
traitlets 3.4 3.3 0.97
typeguard 4.2 4.2 1.0
typing-extensions 2.3 2.3 1.0
typing-inspect 1.3 1.2 0.92
urllib3 5.7 5.6 0.98
wcwidth 1.5 1.5 1.0
webencodings 0.5 0.5 1.0
websocket-client 5.0 4.9 0.98
wheel 5.5 5.4 0.98
widgetsnbextension 10.4 10.3 0.99
wrapt 33.0 32.9 1.0
xmltodict 1.6 1.6 1.0
zipp 3.4 3.4 1.0

Great to see some work on this, many thanks for the initiative!

Looking at the Project List specification, 2 questions arise:

  • Was it intentional to drop the unnormalized (real) project name from the list? This information was available in the HTML serialization.
  • Is the url field only there to be consistent with PEP 503 (1.0)? It otherwise seems redundant, because according to the spec the url can be deduced from the name.

It makes the information self-contained. Otherwise you would have to pass around the JSON and the URL to be able to construct/extract all relevant data instead of just the JSON payload.

I went and double-checked PEP 503, and it’s unclear in this area. It states that the anchor text must be the “name” of the project.

It’s been a while since I had looked closely at the project list response on /simple/, and I had assumed that it was the normalized name I referenced in PEP 503 TBH, though upon closer examination I see that it’s actually the unnormalized name in practice.

So no, it wasn’t actually intentional.

However, the normalized name makes much more sense as the key in the JSON response, so I’m not going to remove that.

I’m also hesitant to add that key. Currently the /simple/ response on PyPI is 20M uncompressed and 3M compressed. The current PEP 691 changes that to 18M and 2.9M. Adding in a name key changes that to 27M and 4.5M [1]. It doesn’t feel worth it to me to add that unless someone feels strongly about it.

It is somewhat redundant, and I thought about removing it. I ultimately didn’t for two reasons:

  1. This makes it an easier diff between the two formats, so integrating with existing projects is simpler.
  2. I want to leave our options open for adding extra information to each project in the future. It felt odd to make the structure an empty dictionary like {"projects": {"$name": {}}}, and adding the URL there was the easiest way to resolve that.

Honestly though, I didn’t spend a ton of time thinking about the project list; it’s not really used by any installers anymore, so from an installer POV it’s largely a vestigial URL. If there are projects out there currently using it that need something like the unnormalized name, then I’m open to changes to it.

You still have to pass around the URL (just like you have to with HTML), because URLs may be relative to the URL you fetched the response from. HTML allows that, and PEP 691 explicitly says that relative URLs are resolved as if the response were HTML (we just don’t have a base URL meta tag like HTML does).

I think that’s a positive thing, since it allows API responses to be mirrored byte for byte, which will end up being important for TUF integration[2].


  1. Data generated using pep691.py · GitHub ↩︎

  2. Saying this now reminds me that the status quo for PEP 503 is that mirrors cannot byte for byte copy PEP 503 from PyPI for the same reason, since URLs are allowed to be absolute, and PyPI uses that to point files to a different domain, mirrors have to rewrite /simple/$project/ to point to different URLs in the filename. This is actually a whole other problem that we’ll have to resolve somehow. ↩︎


It’s been about a month since I posted the last update to the PR. The feedback on this PR in that time hasn’t really raised any major concerns that I think the PEP doesn’t already address, and overall, I think the PEP has ended up addressing any concerns folks did have. We also have two proof-of-concept PRs that I wrote that are more or less ready to land once tests are written for them, other than the Warehouse PR, which also needs some VCL written. There is also a draft PR for proxpi by @EpicWink that appears to be functional, and maybe even ready to land if this PEP gets accepted, and @brettcannon has indicated he could implement this for mousebender.

We’ve also got some good data from @EpicWink that suggests that it doesn’t meaningfully affect response size (5% bigger without compression, 3% smaller with), and while it’s not as big of a deal since installers don’t really use that page, this does actually make /simple/ smaller for both uncompressed and compressed.

I think the only real open questions that have come up are:

  • My question about some of the recommendations, but that’s a non-normative section so we can update it at any time, and I suspect we might want to once we have real world experience, so I think that’s fine.
  • The recent question about the unnormalized name being available. I think we can leave that out for now; we can always add that key later if we decide it’s useful enough, since adding keys is backwards compatible but removing them is not.

Given all of that, I’m going to ask @brettcannon to go ahead and pronounce on this PEP, unless someone has some concern or objection that they’ve not yet raised.


I object! I’ve always wanted to say that :stuck_out_tongue_winking_eye:. Here (in this post) is some general feedback I’ve gathered.
I also still have a major issue I want to discuss (not in this post); I’m trying my best to get that finished up as soon as possible!


Abstract

However, due to limited time constraints, that effort has not gained much if any traction beyond people thinking that it would be nice to do it.

This was a bit awkward/unpleasant to read. Maybe add commas around “if any” and remove the last word, “it”?


Both the terms “canonicalized name” and “normalized name” are used; would it maybe be better to choose one of the two? Using both could be confusing.


Project Detail

This URL must respond with a JSON encoded dictionary that has two keys, name, which represents the normalized name of the project and files. The files key is a list of dictionaries, each one representing an individual file.

Shouldn’t it be “three keys”? The metadata key was not mentioned. Although the metadata field is not mandatory, I think it should at least be mentioned here.


TUF Support - PEP 458

“But I believe that”

Has this now been confirmed? If so, could we replace “I believe that” with something more factual?


TUF Support - PEP 458
and
Doesn’t TUF support require having different URLs for each representation?

These two sections are largely duplicate text. In my opinion, they could either be reduced in size, or the FAQ section could be removed entirely.


Appendix 1: Survey of use cases to cover
This listing is described by the following phrase:

This is how they use the Simple + JSON APIs today:

Nitpicking a bit here :sweat_smile:, but pip lists “Full metadata (data-dist-info-metadata)” (PEP-658), although that isn’t the case right now: Use data-dist-info-metadata (PEP 658) to decouple resolution from downloading by cosmicexplorer · Pull Request #11111 · pypa/pip · GitHub


I don’t fully understand how the following two quotes reconcile:

“All serializations version numbers SHOULD be kept in sync”
and
“since 1.0 will likely be the only HTML version to exist”


I feel like the points from this message have not yet been properly addressed: PEP 691: JSON-based Simple API for Python Package Indexes - #25 by layday

I agree with @domdfcoding that the to-be-deprecated cgi should probably not be used in the code example. Examples will get copied, and will be used. It might be unfortunate that the alternative is more verbose, but if that is the reality of the situation, so be it…

According to the RFC, “If no Accept header field is present, then it is assumed that the client accepts all media types.” Meaning a missing Accept header is equivalent to Accept: */*. So a server must never return a 406 when presented with a missing Accept header. Agree?
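Both points raised here (avoiding the deprecated cgi module and treating a missing header as */*) could be sketched with the stdlib email.message machinery; this is illustrative only, not the PEP’s actual example:

```python
from email.message import Message

def parse_accept(value=None):
    """Parse an Accept header into (media_type, q) pairs.

    A missing header (value=None) is treated as "*/*", so a server using
    this never sees "no acceptable types" for a header-less request.
    """
    pairs = []
    for part in (value or "*/*").split(","):
        msg = Message()                      # stand-in for cgi.parse_header
        msg["content-type"] = part.strip()
        pairs.append((msg.get_content_type(), float(msg.get_param("q", 1.0))))
    return pairs

print(parse_accept("text/html;q=0.5, application/vnd.pypi.simple.v1+json"))
print(parse_accept(None))  # same as "*/*"
```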

Updated the PEP with these changes.

Yea, the original PEP didn’t have the meta key, and I just forgot to update “two” to “three” in that spot. Fixed in the PEP.

Yes, updated the PEP.

Dropped the FAQ.

Slight reword to mention that it’s how they use it today, or plan to in the near future.

Since it was talking about content types, I meant the content type for version 1; if we make a v2 content type, we’re unlikely to ever produce that for HTML.

I’ve updated the PEP to be clearer that versions should be kept in sync across serializations, within a major version, but across major versions do not have that same recommendation. I’ve also clarified that 1.x will likely be the only version of HTML to exist, instead of 1.0.

I’m curious what points you think haven’t been addressed? I see 4 points in that post:

  • Clarification of whether requesting v1 means 1.x or 1.0, which the PEP states:

    Since only major versions should be disruptive to clients attempting to
    understand one of these API responses, only the major version will be included
    in the content type

  • What constitutes a backwards compatible change, for which the PEP gives rough guidelines under the “Versioning” section, but explicitly calls out as intentionally vague because it is hard to express the full set of changes that may or may not be compatible. Future PEPs can decide whether it’s a Major or Minor version bump, and can justify that on their own merits.
  • Being explicit with the latest version, which the PEP already incorporated that suggestion.
  • The recommendation not to add the +html content-type, and only rely on text/html. I don’t agree with “just” sticking with text/html, so I purposely kept the new content type for HTML. I’ve updated the PEP with an explicit FAQ about it.

If there’s something else you think wasn’t addressed, I’m not seeing it; 3/4 of that post directly resulted in updates to the PEP, and for the remaining one I disagreed, but I’ve added a FAQ section for it now.

I’ve updated the example. I think it makes it slightly less clear, but it’s not a big deal either way. I’m less worried about the verbosity and more worried that parsing a header isn’t an interesting part of the client request flow, so dedicating more lines of code to it than needed just adds extra noise that makes it harder to understand what’s going on.

A missing Accept header is functionally equivalent to Accept: */*, yes, so a server should not respond with a 406.

None of those changes have been merged yet, but they’re at PEP691: More Updates and Clarifications by dstufft · Pull Request #2645 · python/peps · GitHub


@wkoorn (an aside, you have a rather confusing username, especially for this forum — would you consider changing it?)

The grammar / purely readability points can be directly proposed as a PR to the text on the PEPs repo, and one of the editors will review. I would then edit your comments above to focus on the substantive challenges/questions to the text. Edit: Donald posted a response seconds before I posted this! Comment is rendered moot.

A (with PEP editor hat on)


Thanks a lot for these changes!


I agree that it can distract a bit from the actual topic at hand. I’m open to alternatives (if there are any), as long as that doesn’t include promoting deprecated modules.


Do you then also agree to change the following section in Version + Format Selection?

  1. If the server does not support any of the content types in the Accept header or if the client did not provide an Accept header at all, then they are able to choose between 3 different options for how to respond:

It now treats a missing Accept header (== Accept: */*) the same as an Accept mismatch, and this would be wrong for option b:

b. Return a HTTP 406 Not Acceptable response to indicate that none of the requested content types were available, and the server was unable or unwilling to select a default content type to respond with.

(full disclosure: I am a colleague of Wouter, though I post this independently)

One of the biggest issues that I see with the PEP is that it claims to represent a sufficiently small change to the underlying data-model that it does not warrant a version increment. I fully support the notion of making the minimal change from which later improvements can be built out, but I don’t see sufficient justification for why the new API shouldn’t just be called v2 (i.e. application/vnd.pypi.simple.v2+json) if any (breaking) changes are introduced.

As a case in point, the project “list” being converted to a dictionary fundamentally changes the underlying data-model. If I wish to have a type which represents v1 data, should I choose a (sorted) list of projects, or an (un-ordered, as per JSON spec) dictionary of them, keyed by the normalized project name? My personal preference would be towards preserving the non-normalized name and order (since it is easy to normalize, and to construct a dictionary if I want one). I could also imagine the order playing a more important role in the future: for example, I believe it would be easy to add pagination and ordering (by last update) to the project list in a future PEP.

To be concrete about this, I propose that the data-model be explicitly stated in the PEP, as I believe this will help to show breaking changes to the data model more clearly and make it easy to know what is serialization implementation detail (esp. in the case of HTML). I put forward an example of a SimpleIndexFile type, even if in practice the API wasn’t incremented when new features were added:

@dataclasses.dataclass
class SimpleIndexFile_Version1p0:
    url: str
    gpg_sig: typing.Optional[bool]
    requires_python: typing.Optional[packaging.specifiers.SpecifierSet]

@dataclasses.dataclass
class SimpleIndexFile_Version1p1(SimpleIndexFile_Version1p0):
    yanked: typing.Optional[str]  # PEP 592

@dataclasses.dataclass
class SimpleIndexFile_Version1p2(SimpleIndexFile_Version1p1):
    dist_info_metadata: typing.Optional[str]  # PEP 658


@dataclasses.dataclass
class SimpleIndexFile_Version2p0:
    filename: str
    url: str
    hashes: typing.Dict[str, str]
    requires_python: typing.Optional[packaging.specifiers.SpecifierSet]
    ...

(note: this is a bit simplified, since it doesn’t deal with the nested type definitions which would be necessary to document the datamodel properly)

For the same reason of data model breakage, the “latest” concept, which goes on to be discouraged (at least, this is how I read “It is recommended however, …”), seems like an unnecessary complication. If you know which metadata you are interested in using from a client implementation perspective, you already know which versions you support and so don’t need the “latest” concept at all. Since the concept of “latest” is entirely optional and client/request-side (the server can respond with whatever it likes), the latest concept is something that can be added later on if necessary, I believe.

To summarise, the list of proposals that I would be interested to have feedback on:

  • Document the datamodel in the PEP (either as Python types, or as a JSON schema)
  • “Project list” becomes a (sorted) list again (if you remove the URL as part of the metadata definition, then this can represent a 40% reduction in compressed size compared to today; see pep691.py · GitHub)
  • The JSON response is called v1.1 if new concepts are introduced but no old ones removed, or v2 if breaking concepts are introduced to the data-model (as per the dataclass definition).
  • The unnormalized name is included (either as the “name” concept, or in some new key) in the project list if the JSON response continues to be called 1.x. For 2.0 it is totally reasonable to remove it from the project list/dictionary (the non-normalized name is actually more useful in the project detail page, but adding this is a proposal that can be easily made after the PEP, since it would be additive)
  • Consider whether dropping the “latest” concept from PEP is reasonable (and whether it indeed can be later proposed in a subsequent PEP if necessary)

To be fair, the data-model is already fairly explicit in both PEP 691 and (to a much lesser extent) PEP 503. The problem with the way it is structured in the PEPs, though, is that it is harder to see breaking data-model changes when they are written as bulleted prose (code is easier to comprehend in this regard, though I accept that this is subjective).

Speaking from experience, the only change in the data model, in terms of how you may represent it as a Python class, is that you can have multiple hashes, compared to the HTML representation’s single hash. That’s pretty minor, and had I thought things through I probably would have been more flexible in how it was represented in mousebender.

But another way to look at it is it is already a major version change: from version html to version json, both starting at a minor version of 1. In the end I don’t think it really matters since this PEP asks you to explicitly opt into the JSON format, so there isn’t really any confusion on the consumer side of what you’re getting.

But I will also fully admit I am known for not liking SemVer, so I have a bias to begin with. :grin:

FYI I plan to give folks up to a week, until this Friday, June 17, to file any feedback/objections, at which point I will consider the PEP ready for me to review (when I have time :sweat_smile:).

I don’t see the PEP as requiring that the information be serialized exactly the same between different serialization formats. That obviously can’t happen, because not every format supports the same data types or the same constraints, and I think the user experience will generally be more positive if each serialization format is free to serialize the same data in whatever form makes the most sense for that format.

The question to me is whether the two serialization formats are serializing the same data or not, not whether the wire format takes the exact same shape. In other words, it’s not intended that you can swap between serialization formats by blindly swapping between html.parser, json.loads(), or a hypothetical other serialization format. In some cases you may be able to do that, because two serialization formats are similar enough, but that’s hardly a requirement of this PEP.

When I look at what is being serialized in the HTML serialization format and the JSON serialization format, the same data is being serialized; the only difference is that HTML and JSON each represent that data in whatever way makes the most sense for their respective formats.

The only real differences in terms of the data that is modeled, after PEP 691, are:

  • There can be multiple hashes
    • The PEP allows more featureful serialization formats to carry data that doesn’t exist in the less featureful ones. Hashes exist in both; the JSON format just supports more than one.
  • We mandate the normalized name in the /simple/ index.
    • I consider PEP 503 ambiguous here. It says “name”, which could be either the unnormalized or the normalized name. I went and looked at what is being done in practice: currently, as implemented, Warehouse (the main implementation behind PyPI) displays the unnormalized name, but our fallback mirror uses the normalized name, so from PyPI you could currently get either. Other implementations seem to be using the normalized name.
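For context, mandating the normalized name is always possible because PEP 503 itself defines a normalization that any implementation can apply to whichever form of the name it stores:

```python
import re

def normalize(name: str) -> str:
    """PEP 503 normalization: lowercase, with runs of -, _, . collapsed to a single -."""
    return re.sub(r"[-_.]+", "-", name).lower()

print(normalize("Nose"))              # nose
print(normalize("Flask_SQLAlchemy"))  # flask-sqlalchemy
```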

So we’re adding an extra feature to the JSON serialization, and we’re making an ambiguous statement in PEP 503 less ambiguous by specifying which of the two options you should pick. Neither of which is a major change to the underlying data model, IMO.

The question of pagination or similar is, I think, a red herring. Outputting an unordered collection doesn’t mean that the input to that response has to be unordered. There’s nothing stopping us from paginating a dict response and just saying that the response is paginated by some key.
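A dict response can be paginated by key just as easily as a list; here is a minimal sketch (all names and the page size are hypothetical, nothing here is part of the PEP):

```python
# Paginate a dict response by key: sort the keys, slice out one page,
# and report the key the client should resume after.

def paginate(projects, after=None, page_size=2):
    keys = sorted(projects)
    if after is not None:
        keys = [k for k in keys if k > after]
    page = keys[:page_size]
    next_after = page[-1] if len(keys) > page_size else None
    return {k: projects[k] for k in page}, next_after

projects = {"pip": {}, "nose": {}, "flask": {}, "requests": {}, "setuptools": {}}
page1, cursor = paginate(projects)          # {"flask": {}, "nose": {}}, resume after "nose"
page2, cursor = paginate(projects, cursor)  # {"pip": {}, "requests": {}}, resume after "requests"
```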

And while it is easy to go from a sorted list to a dict, it’s just as easy to go from a dict to a sorted list, so that’s not really a useful concern.

I’m not married to the latest version. I added it because I thought it would be useful for people who want to specify that they want a specific serialization format, but they don’t care about the specific version they get. This would be most useful for people who are just manually exploring the API.

I’m not sure that I see a ton of value here, but it can be added if people want it. Like I said, though, I don’t think the data model has to match the serialization on the wire; it’s more that the same data is being serialized.

PEP 503 does not make any claims about the ordering of items in either response, and implementors are free to put them in any order they want. So while they’re technically sorted, by nature of the fact that HTML requires them to be in some order, that order has no meaning and can change at any time, including on every page load.

On PyPI they are ordered by normalized name because it’s convenient for the page to have deterministic output, since that makes debugging the CDN easier, and normalized name is just the field we happened to pick.

That’s an implementation detail of PyPI though, the underlying data is best thought of currently as a set. I chose to make it a dict in JSON because I felt that was a more natural way to express that data in JSON.

I’m torn on removing the url completely. PEP 503 does explicitly say the url is a required part of the content, and it would be required for the historical purpose of that API… but that historical purpose is not really useful anymore, so the page overall is a bit of a vestigial appendage on the API, something we preserved mostly to keep backwards compatibility with clients using pre-PEP 503 normalized URLs on servers that can’t reliably redirect to the normalized name.

I’d point out that removing the url is where all your savings come from in that case; putting the name in is always an increase in the response size. For instance, if I use a structure that is a dict mapping normalized name to an empty dict (so the same as PEP 691 is now, but dropping url), I get something even smaller than the list with no url (9M Uncompressed / 1.8M Compressed vs 7.2M Uncompressed / 1.7M Compressed).
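To illustrate the shape comparison (a toy sketch with made-up project names and a made-up url pattern, not Warehouse’s actual output), here are the two JSON forms side by side:

```python
import json

names = ["flask", "nose", "requests"]

# A list of objects, each carrying the project name and a per-project url.
as_list = json.dumps(
    [{"name": n, "url": f"/simple/{n}/"} for n in names],
    separators=(",", ":"),
)

# A dict mapping normalized name to an empty object -- no per-project url at all.
as_map = json.dumps({n: {} for n in names}, separators=(",", ":"))

print(len(as_list), len(as_map))  # the map form drops the per-project url bytes
```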

There are really three questions here:

  1. Should we represent the projects on the index page, in JSON, as a list or a map.
  2. Should we include information on the non-normalized name, the normalized name, or both.
    • An important thing to note here is that, given PEP 503 never mandated non-normalized names, they might not even be available in some implementations of PEP 503. I am aware of at least one implementation (internal to a company) where that is the case, so I don’t think it’s even possible to mandate non-normalized names. So we could either leave it ambiguous or mandate normalized names, since you can always go from an unknown type of name to a normalized one.
  3. Should we include the URL.

Each of those questions impacts the total response size in some way. I personally still feel comfortable with the decisions made in the PEP regarding those three questions (map, normalized, yes). I’m struggling to think of a use case for the API, as it exists in PEP 503, that those decisions don’t cover.

I don’t think v2 is appropriate here, as the underlying data model is fully backwards compatible. You could argue for v1.1, but I don’t think it’s a useful distinction, nothing new is being added to HTML, just JSON is being added.

As said above, PEP 503 doesn’t specify what kind of name should be used on the project index, and in practice both types of names are in use, so from my POV, the non-normalized name was never guaranteed in PEP 503.

I don’t feel strongly about it either way. It can definitely be added later, since it’s just another content type, and if folks don’t think it’s useful it’s easy enough to strike it.

Ironically enough, the only thing that caused pip any problems when I implemented that was yanked, and that’s because pip’s internal representation matches HTML exactly: yanked=None means it’s not yanked, any non-None yanked means it’s yanked, and yanked=str means it’s yanked with a reason. That didn’t require changing their data model though, just adding some extra deserialization logic after the json.loads().
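A sketch of the kind of post-json.loads() adaptation described (an assumed shape, not pip’s actual code): JSON’s yanked key is false | true | string, while the HTML-shaped internal model uses None for “not yanked” and a value for “yanked”, a string when a reason was given.

```python
# Convert PEP 691's JSON yanked value (false | true | str) into an
# HTML-shaped internal representation (None = not yanked, value = yanked).
# Hypothetical helper, for illustration only.

def adapt_yanked(json_yanked):
    if json_yanked is False:
        return None          # not yanked
    if json_yanked is True:
        return ""            # yanked, no reason given
    return json_yanked       # yanked, with a reason string
```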

Having hashes not attached to the URL will likely require some minor Python class changes.
