Add a benchmark run to the buildbot infrastructure

Let me know if “Core Workflow” is not the proper forum.

The Python organization has access to a machine for running benchmarks at speed-python.osuosl.org; this is the machine that drives the results uploaded to speed.python.org.

We had a short discussion on the speed@python.org mailing list (top of the thread) about how to run the benchmark tests via a cron job, but it was never resolved.

I would like to propose that there be a buildbot job to run benchmarks, and that a buildbot slave run on that machine. I am willing to help set that up. My desire to do this is tied to getting alternate interpreter implementations (e.g. PyPy) connected as well. So my questions are:

  • is this the right place to be asking about this?
  • what needs to be done to get a new buildbot benchmark job defined?
  • can we use speed-python.osuosl.org to run a slave for that job?
  • can we also benchmark PyPy?

Buildbots are usually used to trigger a job on each commit. We only have a few buildbot workers running once per day.
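For reference, here is a minimal sketch of how such a periodic benchmark job could be declared in a buildbot master.cfg. The builder, worker, and command names are hypothetical; only the buildbot APIs themselves are real. Note that newer buildbot releases say “worker” where this thread says “slave”.

```python
# Sketch only: hypothetical names, real buildbot APIs.
from buildbot.plugins import schedulers, steps, util

factory = util.BuildFactory()
factory.addStep(steps.ShellCommand(
    name="run-benchmarks",
    # Hypothetical command; the real job would invoke pyperformance.
    command=["python3", "-m", "pyperformance", "run", "-o", "result.json"],
))

# c is the BuildmasterConfig dict defined in master.cfg.
c['builders'].append(util.BuilderConfig(
    name="benchmark-runner",               # hypothetical builder name
    workernames=["speed-python-worker"],   # hypothetical worker name
    factory=factory,
))

# Nightly triggers on a time schedule instead of on each commit: once per
# day by default, or weekly by restricting dayOfWeek (0 = Monday).
c['schedulers'].append(schedulers.Nightly(
    name="weekly-benchmarks",
    builderNames=["benchmark-runner"],
    dayOfWeek=6, hour=3, minute=0,         # Sundays at 03:00
))
```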

I proposed running the job once per week, and I don’t see why buildbot is needed for that. Do you want to run the job more frequently? I proposed once per week because CodeSpeed, the website that displays the results, is limited to 50 dots, and I would like to display the longest possible timeline. Currently, we display results spanning more than 3 years! (50 dots over 3 years means the historical runs have averaged roughly one every three weeks.) With one dot per day, we would be limited to less than 2 months.
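For concreteness, the coverage arithmetic behind the 50-dot limit:

```python
# How much history fits in 50 dots at a given run cadence (plain arithmetic).
MAX_DOTS = 50
print(f"one run per day:  {MAX_DOTS} days (~{MAX_DOTS / 30:.1f} months)")          # < 2 months
print(f"one run per week: {MAX_DOTS * 7} days (~{MAX_DOTS * 7 / 365:.1f} years)")  # ~1 year
```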

can we also benchmark PyPy?

Not yet: see “Analyze of PyPy warmup in performance benchmarks” in Victor Stinner's notes.

Someone has to analyze each of the 60+ benchmarks and decide the number of warmup runs for each, and then hardcode these values for PyPy in pyperformance.
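To make the scale of that work concrete, here is a minimal sketch of what hardcoded per-benchmark warmups could look like. The benchmark script names are real pyperformance benchmarks, but the warmup counts and the wrapper itself are hypothetical; the real change would live inside pyperformance, and choosing the right counts is exactly the analysis work described above.

```python
# Hypothetical wrapper: run pyperformance benchmark scripts under PyPy with
# per-benchmark warmup counts.
import subprocess

PYPY_WARMUPS = {
    "bm_nbody.py": 25,   # placeholder value, not an analyzed result
    "bm_telco.py": 10,   # placeholder value, not an analyzed result
}

def run_benchmark(pypy_path, script):
    warmups = PYPY_WARMUPS.get(script, 1)
    # pyperf-based benchmark scripts accept --warmups on their command line.
    subprocess.run(
        [pypy_path, script, "--warmups", str(warmups),
         "-o", script.replace(".py", ".json")],
        check=True,
    )
```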

I can set up a cron job; is there an example Chef/Puppet task that shows how to do so?

I’m not sure that the benchmark server is controlled by Chef or Puppet. You should look at http://infra.psf.io/. See also my notes at https://pythondev.readthedocs.io/infra.html.
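Whether it ends up under Chef/Puppet or plain cron, a minimal sketch of the job itself could look like this. The paths and the crontab entry are hypothetical; only the pyperformance invocation is real, and uploading the results to speed.python.org is left out.

```python
#!/usr/bin/env python3
# Hypothetical weekly benchmark driver. Example crontab entry
# (Sundays at 03:00):
#   0 3 * * 0 /srv/benchmarks/run_weekly.py
import datetime
import subprocess

OUTPUT_DIR = "/srv/benchmarks/results"   # hypothetical location

def main():
    stamp = datetime.date.today().isoformat()
    output = f"{OUTPUT_DIR}/cpython-{stamp}.json"
    # Run the full pyperformance suite and write the results to a JSON file.
    subprocess.run(
        ["python3", "-m", "pyperformance", "run", "-o", output],
        check=True,
    )

if __name__ == "__main__":
    main()
```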