Why is Python not compiled against C18?

… can’t I try? If there’s a bench suite, I can test if the change is significant or not.

Is there a bench suite or not?

There is: https://pyperformance.readthedocs.io/
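
For a quick look at what it contains, something like this should do (just a sketch; the exact benchmark list depends on the installed version):

pip install pyperformance
# list the benchmarks that make up the suite
pyperformance list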

Well, this is what I’ve done:

git clone blablabla
pip3.9 install pyperformance
cd cpython
# I have to set CC or it does not work...
CC=gcc-9 ./configure --enable-optimizations
make
sudo ./python -m pyperf system tune --affinity 0
mkdir build/bench
./python -m pyperf run -b all -r --affinity 0 -o build/bench/py3_9.json

Result:

-m pyperf: error: argument action: invalid choice: ‘run’ (choose from ‘show’, ‘hist’, ‘compare_to’, ‘stats’, ‘metadata’, ‘check’, ‘collect_metadata’, ‘timeit’, ‘system’, ‘convert’, ‘dump’, ‘slowest’, ‘command’)

“timeit” is like the stdlib timeit: it requires a statement. “command” benchmarks a command-line program invocation. All the other commands operate on the JSON benchmark files.
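
For illustration, a rough sketch of those two sub-commands (the statements being timed here are just examples, not from my run):

# "timeit" benchmarks a single Python statement, with an optional setup
python -m pyperf timeit -s "x = list(range(1000))" "sorted(x)"
# "command" benchmarks a whole program invocation
python -m pyperf command -o noop.json -- python -c "pass"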

“run” is gone; apparently there is no longer a benchmark suite for Python.

I installed pyperformance version 1.6.1.

PS: all these commands were run inside an activated venv.

python -m pyperf and pyperformance (or python -m pyperformance) are different things. The run command is from pyperformance, not pyperf.
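
In other words, a sketch of the distinction (run --help on both to see the actual command lists for your versions):

python -m pyperf --help          # micro-benchmark tool: timeit, command, compare_to, ...
python -m pyperformance --help   # benchmark suite runner: run, list, ...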

Ok, this time this is what I did:

# yes, you have to uninstall both...
pip3.9 uninstall pyperformance pyperf --yes
# deactivate my previous venv (created with the altinstalled python3.9)
deactivate
# create a venv with the compiled python
./python -m venv build/venv
. build/venv/bin/activate
pip install pyperformance
pyperformance run -b all -r --affinity 0 -o build/bench/py3_9.json

It seems to be working now, using the right Python.
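
Once a second build is benchmarked into its own JSON file, the two results can be compared with pyperf; a sketch, where the second file name is hypothetical:

python -m pyperf compare_to build/bench/py3_9.json build/bench/py3_9_c18.json --table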

Ok, it finished. There are some problems.

  1. sometimes I get

WARNING: the benchmark result may be unstable

  • the standard deviation (x ms) is y% of the mean (z ms)
  • the maximum (w ms) is j% greater than the mean (k ms)

and it suggests running system tune. But I already ran it. The docs say this is caused by a low number of runs, but if I change it, I change it for ALL benchmarks.

The only solution I found is to exclude the unreliable benchmarks and run them separately with an ad hoc number of runs (sketched below).

If you have a simpler solution, please tell me ^___^
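
For the record, here is the workaround sketched out; unstable_bm is a placeholder for whichever benchmark was flagged, and the all,-name exclusion syntax is how I read the -b help text:

# exclude the unstable benchmark from the full run
pyperformance run -b all,-unstable_bm -r --affinity 0 -o build/bench/py3_9.json
# then run it on its own into a separate result file
pyperformance run -b unstable_bm -r --affinity 0 -o build/bench/py3_9_unstable_bm.json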

  2. The pyperformance docs and system tune suggest isolating CPUs on Linux:

Linux scheduler: Use isolcpus= kernel parameter to isolate CPUs
Linux scheduler: Use rcu_nocbs= kernel parameter (with isolcpus) to not schedule RCU on isolated CPUs

Should I do it, or are --affinity 0 and a freshly rebooted system with no other tasks enough?
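
For context, full isolation would look roughly like this; the CPU numbers and the GRUB file are example values for a Debian-like system, not something taken from the docs:

# /etc/default/grub (example: isolate CPUs 2 and 3)
GRUB_CMDLINE_LINUX="isolcpus=2,3 rcu_nocbs=2,3"
# then sudo update-grub && reboot, and pin the run to the isolated CPUs
pyperformance run -b all -r --affinity 2,3 -o build/bench/py3_9.json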

See the https://pyperf.readthedocs.io/en/latest/system.html documentation if you would like to get more reliable benchmark results. The warning is just a warning: you’re free to ignore it, but at least you have been warned :wink: