You make a good point, Alyssa.
I remember when the language S first came out at Bell Labs (it later evolved in part into a free language called R that I often use), and the name implied statistical computing. (The name R is a sort of joke on S.)
The point was that for serious statistics work, you wanted a language with good data types and vectorized functions that allowed fairly rapid calculations and an easy way to describe complex operations such as multiplying matrices.
That was not the main goal of Python, where the original idea was to build much of the language on a more flexible concept of a list, along with a few other data structures that can hold many different kinds of things and thus are not as efficient to store or manipulate.
The main way for Python to compete is to offer additional modules, or collections of modules, that create a somewhat different concept and environment within their own zone. NumPy and its cousin pandas are two of many such examples. Many problems start by taking items in a Python program that are stored in lists and/or dictionaries and various other collections or iterables and copying them into NumPy arrays or pandas DataFrames. Once there, in an environment largely written in compiled library code, you write what is still Python code but does many things very fast, as in vectorized. When done, you may optionally convert the results back into more familiar Python objects, though I suspect in more and more places that is not necessary, as NumPy-style objects are now written to blend in all over the place.
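A minimal sketch of that pattern, with made-up data and variable names chosen purely for illustration: plain Python lists are copied into NumPy arrays, the arithmetic runs vectorized in compiled code, and the result can be converted back to a list at the end.

```python
import numpy as np

# Ordinary Python objects: flexible, but stored as boxed values
prices = [10.0, 12.5, 9.75, 11.25]
quantities = [3, 1, 4, 2]

# Copy into NumPy arrays: homogeneous, compactly stored
p = np.array(prices)
q = np.array(quantities)

# Vectorized arithmetic runs in compiled library code, not a Python loop
revenue = p * q          # elementwise multiply
total = revenue.sum()    # fast reduction

# Optionally convert back into familiar Python objects
revenue_list = revenue.tolist()
print(total)             # 104.0
```

For a handful of values the round trip is overkill, but once the lists hold millions of items the vectorized middle section is where the speed comes from.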
There are other worlds peripheral to Python, such as the various modules that make graphs, scientific/statistical libraries such as sklearn, and a slew of machine-learning and AI tools, starting with TensorFlow and then layer upon layer built on top of it (and of course NumPy), such as Keras.
When I use such tools, I see them as an extension of Python that adds functionality that is not built in but is made possible by an architecture that allows optional expansion.
But although some see taking a mean as a perfectly common need, others see taking means as part of a larger body of code, including code working on massive amounts of data. As @ncoghlan points out:
“The standard library version is there for educational purposes and as a convenience for ad hoc scripting rather than being relied on for any kind of serious number crunching activity.”
Much of Python is that way. And it is not a bad thing to extend Python for your purposes only when needed.
This reminds me a bit of an amusing little book about Lisp from years ago that showed a recursive function to decide whether A is greater than B. I will spare you the code, except to say that if you asked whether a billion is greater than a trillion (assuming you are not in a region like Britain), the answer is to subtract one from each and call yourself until one number or the other reaches 0. At that point, about a billion calls deep in the stack unless you have tail recursion implemented, you decide a trillion is bigger!
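For the curious, the idea is easy enough to sketch in Python (my reconstruction of the technique, not the book's actual Lisp code). It works fine for small non-negative integers; for a billion and a trillion it would blow the recursion limit long before finishing, which is the point.

```python
def greater(a, b):
    """Decide a > b by decrementing both until one reaches zero."""
    if a == 0:
        return False   # a ran out first (or they tied): a is not greater
    if b == 0:
        return True    # b ran out first: a is greater
    return greater(a - 1, b - 1)

print(greater(5, 3))   # True
print(greater(3, 5))   # False
```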
But is that really a good solution? It is sort of elegant, and in principle it works for really big numbers on a near-infinite Turing machine. But on many computers, which might hold a number in 64 bits, there are generally very fast ways to determine whether the contents of register A are less than, greater than, or equal to the contents of register B, and in constant time.
The point is that the toy version of Lisp that used recursion for everything is not necessarily the best approach for many problems, but it is certainly fine for teaching ways of approaching problems, especially small ones. Python is way more than that Lisp was, but it shares some aspects, IMO.