I run a bleeding-edge Python as one of my main interpreters. Since it runs a decent amount of code, I ./configure --enable-optimizations so it profiles and optimizes. After a minor change, I’d normally expect to be able to do an incremental build, but the profiling breaks this:
Python/critical_section.c:55:1: error: source locations for function ‘_PyCriticalSection2_BeginSlow’ have changed, the profile data may be out of date [-Werror=coverage-mismatch]
   55 | _PyCriticalSection2_BeginSlow(PyThreadState *tstate, PyCriticalSection2 *c, PyMutex *m1, PyMutex *m2,
      | ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~
I tried removing the profiling data for that one file, but then it just complained that there wasn’t any profile for it. My workflow is therefore make clean; make -j for any update at all.
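For reference, a minimal sketch of that full-rebuild loop (the commands are as above; the -j count is an assumption, pick one to match your core count):

    # Configure once with profile-guided optimization enabled.
    ./configure --enable-optimizations

    # After any source change: wipe objects and stale profile data,
    # then rebuild from scratch.
    make clean
    make -j8    # assumed core count; plain "make -j" runs unlimited jobs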
Is there a better way? How do you folks manage small updates?
On Windows, I just endure it. Almost every time I update from the main branch (3 or 4 times a week), it takes “a long time” to recompile, so I’m just used to firing it off and doing something else. I swear it goes even slower if I stare at it and wait for it to finish.
In general, the norm for PGO and LTO builds is to rebuild everything. No project puts effort into making those workflows incremental, because they are only meant for slow, release-candidate-style builds.
Cool, thanks. It seemed a little odd that the profile files are all separate (there’s a .gcda file for every .c/.o file) if there’s no way to just flag one of them as invalid. But if those are the consequences, frankly, I’ll take long rebuild times over slower Python performance; PGO is still worth using.
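In case it’s useful to anyone else: the per-file layout is easy to see after an instrumented build has run (a sketch; the exact paths depend on your build tree):

    # Each compiled .c/.o gets a matching GCC profile data file beside the object.
    find . -name '*.gcda' | head
    # e.g. ./Python/critical_section.gcda, ./Objects/listobject.gcda, ...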
I just build from make clean too. On my M4 Mac Studio it takes less than 2.5 minutes to build with optimization. That’s compared to ~50s for a non-optimized build (which is what I usually do for local development).
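For completeness, the non-optimized setup is just the defaults (a sketch, assuming a stock checkout). Since no profile data is involved, incremental rebuilds also work fine there:

    # Default (non-PGO) build: no .gcda files, so "make" after an edit
    # recompiles only what changed instead of tripping coverage-mismatch.
    ./configure
    make -j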