Is there a way to find out why Python takes so long to execute scripts?

We have a bunch of Python scripts that analyze our code statically.

This is what we do:

  • The developer runs a custom Linux command called check
  • We find all .git directories on their machine
  • For each .git directory, we find files that have been changed but not yet committed
  • For each such file, we run a check.py command (see the sketch after this list)
  • That check.py command extracts information about the given file and calls (imports) other Python files to check it
  • Each Python checker file prints any problems it encounters
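In rough terms, the check command does the equivalent of this simplified sketch (the real script differs; the use of git status --porcelain and the direct python3 check.py call are illustrative assumptions):

import subprocess
from pathlib import Path

def changed_files(repo: Path):
    # Files that are modified but not yet committed (illustrative; the real
    # command may use different git plumbing).
    out = subprocess.run(
        ["git", "status", "--porcelain"],
        cwd=repo, capture_output=True, text=True, check=True,
    ).stdout
    for line in out.splitlines():
        yield repo / line[3:].strip()

def run_checks(root: Path):
    # Walk root, find every .git directory, and run check.py on each changed file.
    for git_dir in root.rglob(".git"):
        repo = git_dir.parent
        for path in changed_files(repo):
            subprocess.run(["python3", "check.py", str(path)])  # location of check.py assumed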

We have more than 250 of these files separated into directories:

api
panel
    __init__.py
    check.py
    menu.py
    routing.py
site
    __init__.py
    check.py
    component-naming.py
js
css
database
...
check.py

We use 4 timing tools:

  • time check to get the total time
  • date -u +%s.%N to get the time it took to run check.py on each file
  • time.perf_counter() to log each check import (from panel import check), as sketched after this list
  • kernprof -l -v to find out the time of each line of execution in detail
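The perf_counter logging around an import looks roughly like this (a simplified sketch; the module name is taken from the layout above):

import time

t0 = time.perf_counter()
from panel import check   # the import being timed
print(f"from panel import check took {time.perf_counter() - t0:.3f} s")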

The problem is that the Linux times do not match the Python times: time.perf_counter() reports 200 milliseconds, for example, while Linux reports 3 seconds.

Our time check Linux command for example shows this result:

real 0m28.848s
user 0m1.257s
sys 0m1.467s

We also get about 1.6 seconds from the per-file timer (date -u +%s.%N) in our check.sh script.

But then we get this line from kernprof:

Total time: 0.0170627 s

There is clearly a huge gap, and it appears to be inside the Python code: somewhere, something takes a long time, and we cannot find out where.

How can we find it out?

You can check the time certain parts of code take to run using this method:

import time
timestart = time.time()
# Do some code here
print(f"It took {time.time() - timestart} seconds")

My bottlenecks are getting records from an SQL database. We probably don’t have an index on fields in the WHERE clause that I use a lot.

Speeding up the database

  1. Add an index on each field that is used frequently in the WHERE clause of your SQL statements (a sketch follows this list).
  2. Tune the database cache, or turn it on; this requires admin-level access to the database settings.
  3. Cache commonly used data on a local drive.
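As a concrete illustration of point 1, here is a minimal sketch using the standard library's sqlite3 module; the database file, table, and column names are made up, so adapt them to your schema and driver:

import sqlite3

conn = sqlite3.connect("checks.db")   # hypothetical database file
# An index on the column used in the WHERE clause lets the lookup below
# avoid a full table scan.
conn.execute("CREATE INDEX IF NOT EXISTS idx_files_path ON files(path)")
rows = conn.execute(
    "SELECT * FROM files WHERE path = ?", ("panel/check.py",)
).fetchall()
conn.close()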

Speeding things up

  1. Look into memoization (a sketch follows this list). The only examples I’ve seen use it with math functions that may be called repeatedly with the same arguments. Memoization using decorators in Python - GeeksforGeeks
  2. Cache commonly used data during the testing phase, which I do. At the beginning of the program I read the data from a file into a dict, use that in-memory dict while the program runs, and save its contents to a JSON file when the program is done. It works fine for me.
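A minimal memoization sketch using the standard library's functools.lru_cache; the expensive_lookup function and its body are hypothetical placeholders for whatever expensive work you repeat:

from functools import lru_cache

@lru_cache(maxsize=None)
def expensive_lookup(key):
    # Placeholder for an expensive operation (e.g. a database query).
    # Results are cached per key, so repeated calls with the same key are free.
    return sum(i * i for i in range(100_000))

first = expensive_lookup("panel/check.py")    # computed
second = expensive_lookup("panel/check.py")   # returned from the cache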

The time.time() - timestart approach is good for a quick-and-dirty estimate. I use this because it’s easy, like using print() instead of logging and a debugger. If you want a more formal approach, you can use the cProfile module:

import cProfile

# ... your code here ...

cProfile.run('your_function_call()')

This returns output that looks something like this:

         4 function calls in 0.064 seconds

   Ordered by: standard name

   ncalls  tottime  percall  cumtime  percall filename:lineno(function)
        1    0.000    0.000    0.064    0.064 <string>:1(<module>)
        1    0.064    0.064    0.064    0.064 test1.py:2(addUpNumbers)
        1    0.000    0.000    0.064    0.064 {built-in method builtins.exec}
        1    0.000    0.000    0.000    0.000 {method 'disable' of '_lsprof.Profiler' objects}

It’s a bit dense and you’ll have to consult the cProfile documentation to understand the different columns, but it’s worth it to get a fine-grained understanding of where the bottlenecks are.


To add, cProfile has a convenient command line interface for profiling scripts. Something like

$ python3 -m cProfile -s cumtime ./check.py

may do the trick well. I usually sort by cumtime for cumulative time spent in each function (including all sub calls), which tells you where in your script most of the time is being spent. You can also pass -o <file> to write the stats to a file (instead of stdout).
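If you do write the stats to a file with -o, the standard-library pstats module can read them back later; a minimal sketch, assuming the stats were written to check.prof:

import pstats

stats = pstats.Stats("check.prof")             # file produced by -o check.prof
stats.sort_stats("cumtime").print_stats(20)    # top 20 entries by cumulative time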

Since this is a CLI application, import times are also a noteworthy concern, especially if you have any heavy dependencies. You can pass -X importtime when starting Python to profile import times. The output will be written to stderr.


It makes no difference. It reports a tenth of the actual time, or even less. I run these scripts on many files and can measure the elapsed time with my wristwatch. It seems that Python only reports the time spent in its own code, and not the time spent in I/O or database operations.