You are right, this is not a case where re-calibration needs to be performed frequently (as opposed to a situation where an algorithm needs to be continuously calibrated on new data).
And the way it is currently done is exactly the way you described: it was calibrated once and the constants are hardcoded.
However, the work I am doing now is proof that changes do happen from time to time, and if what I am proposing had been done last time, my current work would have been much smoother.
Things that could change:
a) Someone decides to improve the calibration because they find evidence that it can be done better (e.g. what I am doing now).
b) A new algorithm is discovered and needs to be incorporated into the basket.
c) Low-level performance changes in functions that some of the algorithms use are significant enough to justify recalibration.
I have done such things many times, and having explicit access to such components (which do the same thing but are bundled together for performance reasons) has made my life much easier.
E.g. one Python function that I sometimes use:
def insort_left_many(a, xs, lo=0, hi=None, *, key=None, method=AUTO):
    # Pick an implementation automatically unless the caller forces one.
    if method is AUTO:
        method = select_method(a, xs, lo, hi, key)
    if method == 0:
        return method0(...)
    elif method == 1:
        return method1(...)
    raise ValueError(f"unknown method: {method!r}")
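The explicit method argument is the whole point: a caller, or a benchmark script, can bypass the automatic selection and exercise each implementation in isolation, e.g.

    # Force a specific implementation, bypassing select_method.
    insort_left_many(a, xs, method=0)
    insort_left_many(a, xs, method=1)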
Initially I had these hardcoded in the way you are describing.
But after several situations similar to the one I am currently facing with string_find,
I changed my practices and started making sure that each implementation can be accessed individually.
These are not cases where continuous calibration is needed, but nevertheless the minimal effort required to provide convenient access has, at least in my experience, proven to be worthwhile.
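For instance, re-deriving the crossover constant is then just a matter of timing the individual implementations. The sketch below is purely illustrative: it assumes the two implementations from the snippet above can be called as method0(a, xs) and method1(a, xs), and it uses a synthetic workload invented for the example; the size it returns would replace the hardcoded constant inside select_method.

    import random
    import timeit

    def find_crossover(sizes=(100, 1_000, 10_000, 100_000)):
        # Time each implementation on the same synthetic workload and return
        # the first input size at which method1 overtakes method0.
        for n in sizes:
            a = sorted(random.random() for _ in range(n))
            xs = [random.random() for _ in range(max(1, n // 10))]
            # Copy `a` on every call so each run sees an identical input.
            t0 = min(timeit.repeat(lambda: method0(list(a), xs), number=5, repeat=3))
            t1 = min(timeit.repeat(lambda: method1(list(a), xs), number=5, repeat=3))
            if t1 < t0:
                return n
        return None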