I’ll address that a bit since it’s an interesting sociological question that isn’t often asked.
The glib answer is “it depends on the developer”. In general, Python is an open-source project, and in most such projects developers “scratch their own itches”. It’s all done in “spare time”, without pay, and most often with no real recognition of the time and effort involved.
For numerics in particular, Python isn’t at all aimed at numeric computation, and has attracted relatively few core devs with “the right stuff” to do this kind of work.
For the most part, at the start we strove only to write thin wrappers around the platform C’s libm functions, and to implement - in simple and portable ways - the “big int” arithmetic functions needed to support Python’s unbounded ints (for which the platform libm has no support).
There has been no “grand plan” beyond that. People work on what they want to work on, as and when they can make time for it. They’re certainly motivated by legitimate bug reports against code they’re familiar with, but “new features” have a very hard time getting adopted.
Even a world-class expert like Mark Dickinson got some flak for, e.g., adding an innovative (& formally proved correct) algorithm for computing the mathematical floor(sqrt(n)) exactly for unbounded ints (math.isqrt()).
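To illustrate why doing this exactly matters (a toy example of my own, not Mark’s): once an int needs more than 53 bits, routing through the platform’s float sqrt silently loses low-order bits, while math.isqrt() is exact for ints of any size.

```python
import math

# A perfect square whose root, 10**20 + 1, needs 67 significant bits --
# more than a C double's 53 -- so no float can represent it exactly.
n = (10**20 + 1) ** 2

# Going through the platform's float sqrt loses the low-order bits:
print(int(math.sqrt(n)))   # some integer near 10**20, but NOT 10**20 + 1

# math.isqrt() works directly on the unbounded int and is exact:
print(math.isqrt(n))       # 10**20 + 1, exactly

# And it's a true mathematical floor for non-squares too:
print(math.isqrt(n - 1))   # 10**20, exactly
```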
The original decimal.py was written by a high-school student who got paid a little by a “feel-good” internship program. He did an amazingly good job on it! But it didn’t become suitable for “real work” (too slow in pure Python) until Stefan Krah released his own libmpdec and contributed careful Python bindings around it.
Beyond that it’s been “a bit here, a bit there”. The last concerted effort I recall wasn’t all that long ago: implementing asymptotically faster ways to do conversions between decimal strings and giant binary ints. All the “number geeks” contributed to that.
Ironically, while that relies in part on the decimal module for its asymptotically superior multiplication algorithms, converting between giant binary ints and decimal.Decimal objects remains quadratic-time. That’s on my own list of “scratches to itch some day”.
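For concreteness, a sketch of the conversions in question (my own toy example). The str <-> int direction is the one that got the asymptotic speedup; the Decimal <-> int direction hasn’t yet:

```python
import sys
from decimal import Decimal

# Python 3.11+ caps int<->str conversion size by default; lift the cap.
if hasattr(sys, "set_int_max_str_digits"):
    sys.set_int_max_str_digits(0)

n = 7 ** 100_000      # a "giant" binary int, about 84,500 decimal digits

s = str(n)            # int -> decimal string: got the asymptotic speedup
assert int(s) == n    # decimal string -> int: likewise

d = Decimal(n)        # int -> Decimal: still quadratic-time
assert int(d) == n    # Decimal -> int: also still quadratic-time
```

Both directions are exact in every case; only their running time differs.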
Then there are unplanned things. For example, I answered a question on StackOverflow about poor results from numpy’s log1p() function applied to complex arguments. In some ways that was hilarious.
Digging into it, I didn’t find an implementation anywhere that actually gave reasonably good results. I thought at first that mpmath was doing a decent job on it, but eventually found cases where it didn’t get any correct bits unless boosting precision to absurdly high values.
It’s hilarious because I thought I could fix it in an afternoon, but ended up crawling on my belly for over a week. It looks like everyone just assumed the error analysis done for the real-valued log1p() would carry over to the complex-valued case, but it doesn’t at all. The complex-valued case is open to entirely new kinds of numerical problems, including catastrophic cancellation so extreme as to destroy all the significant bits. But these problems are non-obvious, applying only in certain cases.
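Here’s a small sketch of the flavor of cancellation involved (my own toy numbers, not the StackOverflow case). For z = x + iy, the real part of log(1+z) is 0.5*log1p(2x + x² + y²); when 2x nearly cancels y², the naive route through |1+z| loses every significant bit, while feeding the small quantity to the real-valued log1p() directly does fine:

```python
import cmath
import math

# A point close to the circle |1+z| = 1, where 2x + x**2 + y**2 ~ 2.5e-17.
x, y = -5e-9, 1e-4
z = complex(x, y)

# Naive: log(1 + z). Computing |1 + z|**2 gives ~1.0 in doubles, so the
# tiny real part is annihilated -- catastrophic cancellation.
naive = cmath.log(1 + z).real

# Rearranged: real(log(1+z)) = 0.5 * log1p(2x + x^2 + y^2). Here the
# cancellation happens among small terms each computed nearly exactly.
careful = 0.5 * math.log1p(2.0 * x + x * x + y * y)

print(naive, careful)   # naive has no correct bits; careful ~ 1.25e-17
```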
Anyway, I eventually contributed a different implementation to mpmath, which I believe is “almost always” correctly rounded everywhere, generally using as little “extra” precision as is actually needed in the various cases.
But everything we do carries “opportunity cost” too: what didn’t I do in Python because I was off doing that in mpmath instead? Which is the other hilarious part: log1p(complex) is basically a goofy function to begin with. The real-valued log1p() serves a real purpose, but not the complex-valued one, so far as I can see. numpy’s version, for a complex argument, just adds 1 to the argument and passes the result to its complex-valued log(). A completely mindless “check the box” implementation. So that’s the funny part: all that effort went into something that likely wasn’t worth doing at all.
No idea what “FMM” might mean. Regardless, that’s not going to change in Python. The result of x * x * x is defined by the IEEE-754 standard, and it doesn’t matter to the standard that it may not (and in fact sometimes does not) return the correctly rounded value of the cube of x.
When they differ, pow(x, 3) is typically the better result. But Python defers to the platform libm to do float powers, and has no say in what it returns. Python passes on whatever libm returns, to be compatible with C/C++ code on the same platform.
Both of those (follow the std; play nice with C/C++) are highly desirable for their own sakes.
An experienced numeric programmer who values accuracy above all will stick with the pow(x, 3) spelling, because something that “looks like” a single operation is open to a correctly rounded implementation; x**3 similarly, for the same reason. x*x*x appears guaranteed to suffer at least two roundings in most languages.
Although libm’s pow() need not come with accuracy guarantees either.
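To make “sometimes does not” concrete, here’s a brute-force sketch (mine, not anything in the stdlib) comparing x*x*x against the correctly rounded cube. Fraction arithmetic is exact, and CPython rounds int-by-int true division correctly, so float() of the exact cube is the correctly rounded double:

```python
import random
from fractions import Fraction

def cube_correctly_rounded(x: float) -> float:
    # Fraction(x) is exact, Fraction(x)**3 is exact, and CPython's
    # correctly rounded int/int division makes float() of the result
    # the double nearest the mathematical cube of x.
    return float(Fraction(x) ** 3)

random.seed(12345)
mismatches = 0
for _ in range(10_000):
    x = random.uniform(1.0, 2.0)
    if x * x * x != cube_correctly_rounded(x):
        mismatches += 1

# The two roundings in x*x*x land on the wrong double a healthy
# fraction of the time.
print(mismatches, "of 10000 cubes suffered from double rounding")
```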