Alright, so here goes.
I imagined an inline function could be defined with an inline keyword. So let's take the example of a function that performs arithmetic, as you suggested above.
inline def multiply(a, b):
    return a * b
Now the function can be called like any normal Python function:
def continuous_multiply(first_iterable: list, second_iterable: list) -> None:
    for x, y in zip(first_iterable, second_iterable):
        print(multiply(x, y))  # inline function called here
What I imagine is that the multiply function would be cached within continuous_multiply, giving quick access for such a repetitive task.
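To make the intent concrete, here is a hand-inlined sketch of what I imagine the interpreter would effectively execute (the transformation itself would happen at runtime inside the interpreter, not in the source):

```python
def continuous_multiply(first_iterable: list, second_iterable: list) -> None:
    for x, y in zip(first_iterable, second_iterable):
        # body of multiply() substituted at the call site,
        # so there is no per-iteration lookup of the function
        print(x * y)
```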
If the inline function were defined in a class, the class itself would have to be declared inline too. This would bring into the cache anything within the class that the inline functions rely on.
inline class Arithmetics:
    inline def multiply(self, a, b):
        return a * b
Using the same continuous_multiply function, the call to the multiply method would look something like this:
def continuous_multiply(first_iterable: list, second_iterable: list) -> None:
    multiplier = Arithmetics()
    for x, y in zip(first_iterable, second_iterable):
        print(multiplier.multiply(x, y))  # inline method called here
So instead of resolving the dotted reference on every iteration, the inlined multiply function would be cached in the same scope as continuous_multiply, which would speed up execution. The dots would of course still be present in the code; the optimization would happen at runtime inside the interpreter.
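One way to see the cost the interpreter currently pays is to disassemble the two forms: the dotted loop performs an attribute lookup on every iteration, while a version that binds the method to a local only does a fast local load inside the loop. A rough sketch (the exact opcode names vary across CPython versions, e.g. LOAD_METHOD vs LOAD_ATTR):

```python
import dis

class Arithmetics:
    def multiply(self, a, b):
        return a * b

def dotted(pairs):
    m = Arithmetics()
    for x, y in pairs:
        m.multiply(x, y)  # attribute lookup repeated inside the loop

def cached(pairs):
    f = Arithmetics().multiply  # looked up once, bound to a local
    for x, y in pairs:
        f(x, y)  # plain local-variable load inside the loop

dis.dis(dotted)  # shows a 'multiply' attribute/method load in the loop body
dis.dis(cached)  # the loop body only loads the local f
```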
Basically, I was looking at cases where many iterations are performed and it is costly to gather the data and instructions to operate on throughout the entire iteration process. Bringing whatever is needed closer for the duration of the iteration, through caching, would be a step toward increasing execution speed, I presume.
The cache could be emptied once the loop (or whatever else keeps the cache active) completes, so maybe there could be an internal reference count of sorts.
The reason for such a feature would be to replace, and probably optimize, something many Pythonistas (myself included) already do by hand to create our own caches, which goes something like this:
def continuous_multiply(first_iterable: list, second_iterable: list) -> None:
    cached_multiply = Arithmetics().multiply
    # or cached_multiply = multiply  # the one declared globally
    for x, y in zip(first_iterable, second_iterable):
        print(cached_multiply(x, y))
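For what it's worth, the manual idiom above is easy to measure with timeit. Here is a minimal sketch (the timings are machine- and version-dependent, so the only claim baked into the code is that the two forms produce the same results):

```python
import timeit

class Arithmetics:
    def multiply(self, a, b):
        return a * b

def dotted_sum(pairs):
    m = Arithmetics()
    total = 0
    for x, y in pairs:
        total += m.multiply(x, y)  # dotted lookup on every iteration
    return total

def cached_sum(pairs):
    cached_multiply = Arithmetics().multiply  # bound method cached once
    total = 0
    for x, y in pairs:
        total += cached_multiply(x, y)
    return total

pairs = list(zip(range(1000), range(1000)))
assert dotted_sum(pairs) == cached_sum(pairs)  # same results either way

t_dotted = timeit.timeit(lambda: dotted_sum(pairs), number=200)
t_cached = timeit.timeit(lambda: cached_sum(pairs), number=200)
print(f"dotted: {t_dotted:.3f}s  cached: {t_cached:.3f}s")
```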
So, as evidenced, the idea is still vague, and I'm not conversant enough with the Python internals to know how hard or easy such a feature would be to implement (though I'm studying the CPython source files in my free time, so I think I'll get the hang of it pretty soon). Still, I think it would be a good optimization point. Perhaps the idea need not end here either; it could be explored and thought of in different ways, and might also be applied to async functions if possible.
So, that’s it for me.
Thanks, Mr. van Rossum.