u"..." is a 16-bit wide string literal. What do you need it for in the CPython core?
Write the patch, measure the impact, and if it’s an improvement, let’s get it in with
#ifdef for compilers that don’t support it.
sigh Which compilers does CPython officially support?
As far as I know, Chromium builds against C11, depending on the platform, even though the target could be C99 without problems. Maybe this enables some optimizations? I think it’s not a bad idea to compile against C11, or better C18, if the compiler supports it, even if C11 Unicode will not be used in CPython.
Ones that support C89 with the C99 features listed in PEP 7.
Chromium is not relevant.
You are making suggestions without mentioning what actual problems they will solve. That is not very useful.
Please do that! If you find that it results in an improvement, then let’s figure out how to best move there.
Python is conservative about C language features because
- Python tries to support a lowest common denominator of systems. Traditionally, this has meant supporting many proprietary Unixes, each with its own proprietary C compiler. These days there are a lot fewer weird compilers around than in the 90s, but there are still people compiling Python themselves on very old systems with old versions of GCC or Clang. MSVC has also historically been slow to implement pure C features. That’s why we only started requiring C99 a few years ago.
- New C features aren’t that interesting. The evolution of the C language itself has been conservative for decades.
Write the patch, measure the impact
Ok, do you know a place in particular where I can make the patch and test it? Maybe
bytesobject.c or both? Should I test
bytearray or all of them? Is
timeit good enough? I see you have a test suite; do you also have a benchmark suite?
and if it’s an improvement, let’s get it in with
#ifdef for compilers that don’t support it.
Sure. Would you also like a coffee?
New C features aren’t that interesting
Well, I agree; indeed, I was only interested in threading and Unicode. But the C11 threading API is not supported by MSVC…
Compiling Python on Windows with GCC produces a slower build?
No, I don’t know. I don’t know what you are trying to accomplish.
I assumed you want to switch to C18 because something would be faster. But I still don’t know what you think would be faster. Or maybe my assumption was not correct.
It doesn’t matter which is faster; we aren’t going to drop MSVC support.
Well, I respect your decision, even if I must say that if every important project continues to support
MSVC, Microsoft will never be convinced to improve it. Edge was developed because people started to drop IE in favor of Firefox, and Chromium/Chrome later. And now we have a lot of programmers that have dropped IE support, for Goddess’ sake.
@encukou: well, in theory unicode literals would be faster. Is there a bench suite in the CPython code? I searched in Tools but haven’t found one.
Again if there was an extremely compelling reason to use C18, we could consider tightening our compiler requirements, but there isn’t.
… can’t I try? If there’s a bench suite, I can test if the change is significant or not.
Is there a bench suite or not?
There is: https://pyperformance.readthedocs.io/
Well, this is what I’ve done:
git clone blablabla
pip3.9 install pyperformance
cd cpython
# I have to set CC or it does not work...
CC=gcc-9 ./configure --enable-optimizations
make
sudo ./python -m pyperf system tune --affinity 0
mkdir build/bench
./python -m pyperf run -b all -r --affinity 0 -o build/bench/py3_9.json
-m pyperf: error: argument action: invalid choice: ‘run’ (choose from ‘show’, ‘hist’, ‘compare_to’, ‘stats’, ‘metadata’, ‘check’, ‘collect_metadata’, ‘timeit’, ‘system’, ‘convert’, ‘dump’, ‘slowest’, ‘command’)
“timeit” is like Python’s
timeit: it requires a statement.
command is for benchmarking a command-line program invocation. All the other commands operate on the JSON bench files.
“run” has gone away; apparently there’s no longer a bench suite for Python.
pyperformance version 1.6.1.
PS: all these commands were run under an activated venv.
python -m pyperf and
python -m pyperformance are different things. The
run command is from pyperformance.
Ok, this time I’ve done:
# yes, you have to uninstall both...
pip3.9 uninstall pyperformance pyperf --yes
# deactivated my previous venv, created with altinstalled python3.9
deactivate
# create a venv with the compiled python
./python -m venv build/venv
. build/venv/bin/activate
pip install pyperformance
pyperformance run -b all -r --affinity 0 -o build/bench/py3_9.json
It seems it’s working now, using the right command.
Ok, it finished. There are some problems.
- Sometimes I get

  WARNING: the benchmark result may be unstable
  - the standard deviation (x ms) is y% of the mean (z ms)
  - the maximum (w ms) is j% greater than the mean (k ms)

  and it suggests running
  system tune. But I already ran it. The docs say this is caused by a low number of runs, but if I change that, I change it for ALL benchmarks.

The only solution I found is to exclude the unreliable benchmarks and run them separately with an ad hoc number of runs.
If you have a simpler solution, please tell me ^___^
system tune suggests isolating CPUs on Linux:
Linux scheduler: Use isolcpus= kernel parameter to isolate CPUs
Linux scheduler: Use rcu_nocbs= kernel parameter (with isolcpus) to not schedule RCU on isolated CPUs
Should I do it, or are
--affinity 0 and a rebooted system with no other tasks enough?
See the https://pyperf.readthedocs.io/en/latest/system.html documentation if you would like to get more reliable benchmark results. The warning is just a warning. You’re free to ignore it, but at least you have been warned.