For a small CPython code fragment of a function, I'd like to record its running time. I found that the execution time varies between runs even though I pinned it to an isolated CPU, so no other process can use that core. It may be because memory is still shared with other processes. If I want to get a reliable list of timings, I need to run the fragment multiple times.
How do I know when the time series is stable, so that I can use it for further analysis?
Is it possible that a program is inherently unstable, so that its execution time varies even after excluding hardware and memory effects?
If I use a bare-metal environment, will I get the same time series for a small CPython code fragment of a function by running it several times?
The variation is almost guaranteed to be caused by OS activity or other programs.
From this docs.python section on timeit:

> higher values […] are typically not caused by variability in Python's speed, but by other processes interfering with your timing accuracy. So the `min()` of the result is probably the only number you should be interested in.
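A minimal sketch of that approach using the standard-library `timeit` module; the fragment being measured (`sum(range(100))`) and the repeat/loop counts are arbitrary placeholders for your own code:

```python
import timeit

# repeat() returns one total time per independent run, so the spread
# between runs reflects interference (OS scheduling, other processes),
# not variability in the fragment itself.
times = timeit.repeat(
    stmt="sum(range(100))",  # the fragment being measured
    repeat=5,                # number of independent runs
    number=10_000,           # loop iterations per run
)

# Noise can only add time, so the minimum is the best estimate
# of the fragment's true cost.
best = min(times)
print(f"best per-loop time: {best / 10_000:.3e} s")
```

Taking the `min()` rather than the mean discards the runs that were slowed down by other activity, which is exactly the interference the docs describe.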
You may want to take a look at pyperf for your timing experiments.