hi @ all,
recently learned that Python uses ‘mpdecimal’ for the decimal module, and since mpdecimal doesn’t provide some functions, e.g. cbrt, the trigonometric functions, exp10 and so on, am I right in assuming that calculations for these are performed in binary and the results then converted to decimal?
TIA for any light you can shed
( aside … I’d read about bans for Stefan Krah and Tim Peters … is python trying to get rid of all good people? )
Yes, mpdecimal (and the pure-Python version of the decimal module) has only basic transcendental functions, barely power and log. Not a surprise: it’s just an implementation of the IBM spec.
Though exp10 (you meant 10**x, right?) can be trivially implemented on top of the available functions.
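For example, a minimal sketch, assuming exp10(x) really does just mean 10**x:

from decimal import Decimal, getcontext

getcontext().prec = 28        # whatever working precision you want

def exp10(x):
    # 10**x at the current context precision; decimal's power operation does the work
    return Decimal(10) ** x

print(exp10(Decimal("0.5")))  # approximately 3.162277660168379331998893544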
You could either implement the missing functions yourself: the documentation has some examples. But keep in mind that it’s not trivial (there are notes in the code). Maybe PyPI has a package built on top of the decimal module which I failed to find… but I doubt it. This rather belongs to the field of scientific computing.
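To give a flavour of what those documentation recipes look like, here is a cosine in the same Taylor-series style (adapted from the cos() recipe in the decimal docs): compute with a couple of extra digits, then round back to the caller’s precision. Note there is no argument reduction, so it is only reasonable for modest |x|.

from decimal import Decimal, localcontext

def cos(x):
    """Return the cosine of x (in radians), in the style of the decimal docs recipe."""
    with localcontext() as ctx:
        ctx.prec += 2                  # extra digits for intermediate steps
        i, lasts, s, fact, num, sign = 0, 0, 1, 1, 1, 1
        while s != lasts:
            lasts = s
            i += 2
            fact *= i * (i - 1)
            num *= x * x
            sign *= -1
            s += num / fact * sign
    return +s                          # unary plus rounds back to the caller's precision

print(cos(Decimal("0.5")))             # ≈ 0.8775825618903727161162815826 at the default 28-digit context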
Or, yes, you can try binary arbitrary-precision arithmetic, e.g. gmpy2 (which wraps MPFR), mpmath (pure Python), or python-flint (a FLINT wrapper). Of course, the usual warnings for decimal vs. binary arithmetic apply (assuming you will do conversions from and to decimal floating-point numbers), see e.g. 15. Floating-Point Arithmetic: Issues and Limitations — Python 3.13.1 documentation.
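A rough sketch of the gmpy2 route (assuming gmpy2/MPFR is installed; it exposes the usual elementary functions such as asin):

import gmpy2
from gmpy2 import mpfr

gmpy2.get_context().precision = 100   # working precision in bits
x = mpfr("0.9999999999999995")        # the string is parsed at 100-bit precision
print(gmpy2.asin(x))                  # ≈ 1.5707962951721...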
Tim is not banned anymore, so there is a chance he will help you if I fail.
No more good news, but you could ask mpdecimal-related questions on that project’s mailing list.
Note that mpmath can run as pure Python, but will use gmpy2 under the covers if it’s installed. That greatly speeds high-precision work. It’s what I’d recommend (writing your own functions for these things is delicate, specialized work). Whichever “kinda standard” function you want, chances are good mpmath has it. For example, earlier this week I used its regularized incomplete lower gamma function. I really wanted a chi square cumulative distribution function, but with a bit of squinting that’s the same thing. Which is a possible problem in practice: “advanced functions” have many interconnections and identities, but the docs aren’t going to spell them out for you.
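Concretely, the squinting amounts to the standard identity chi2_cdf(x; k) = P(k/2, x/2), where P is the regularized lower incomplete gamma function. A small sketch (the 3.84 input is just an illustrative value):

from mpmath import mp, gammainc

mp.dps = 30

def chi2_cdf(x, k):
    # regularized lower incomplete gamma P(k/2, x/2)
    return gammainc(k / 2, 0, x / 2, regularized=True)

print(chi2_cdf(3.84, 1))   # ≈ 0.94996, close to the familiar 95% cut-off for 1 degree of freedom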
I don’t think so, but it does seem to have it in for “math people”. I’m back now. Last I heard, Stefan will never come back, and neither will Steven D’Aprano (Python’s statistics module). Mark Dickinson (the very best “numbers geek”) left too, but on his own. Raymond Hettinger is still around, but less active lately on numerical stuff. We’re very fortunate that @skirpichev stepped up!
Only for integer/rational arithmetic. (Unfortunately, so far; with the price that you might lose hours of computation to a memory error.)
Using gmpy2 as an MPFR interface should be much faster in most cases, but you are right: MPFR has fewer special functions. And gmpy2 has even fewer if you go to the complex domain. IMO, FLINT (via its Cython wrappers) is the best competitor to both mpmath and gmpy2. But it has a more complex model for real numbers, not just plain floating-point arithmetic. That has advantages, like attached error analysis, and a price (speed) for all such “included batteries”.
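For a flavour of those attached error bounds (a sketch assuming python-flint is installed; the exact output format varies by version):

from flint import arb, ctx

ctx.prec = 64                 # working precision in bits

# 1/3 is not exactly representable, so the result carries an error radius;
# printing shows a midpoint together with an uncertainty bound.
x = arb(1) / 3
print(x)                      # e.g. [0.333333333333333333 +/- 3.34e-20]

Carrying those bounds through every operation is exactly the speed price mentioned above.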
It’s a very poor “replacement”. (“We have no irreplaceable people.” (c), heh?)
I wish the SC would eventually reconsider its former decision on Stefan. The decimal module now lacks a maintainer, despite earlier hopes. Of course, we can interact via the mpdecimal project’s mailing list (where Stefan kindly answered even my dumb questions), but that looks wrong to me.
hi, thanks for the quick and competent answers,
sorry that I started a multi-topic discussion; for me it’s just one segment of my ‘view on python’.
I. Hearing about Steven and Mark is sad. Sometimes I have a talent for seeing one part of a puzzle (the ban for Tim), then another (the ban for Stefan), suspecting a more general problem, poking around and getting it confirmed. Having only a few ‘top dogs’ for (segments of) a project is a not only theoretical risk; not having them is difficult too.
II. Also, my sympathy for Python dropped below half when I learned some time ago that it was acquired by M$.
III. Python’s different modules … I simply don’t know them, and I think that’s a weak point I share with many users: we simply want calculations done, we want ‘math’; we lack the time and interest to study different modules whose names we don’t even know at first. That’s a little removed from the POV of developers, who are familiar with their ‘field’.
IV. I’m not the ‘normal user’, but I’m poking around in the capabilities and shortcomings of ‘math in computers’ in general. For that, mpdecimal provides an enormous step ahead in range (pure overkill), precision (pure overkill) and speed, outperforming IEEE decimals (my gcc / libdfp / BID implementation) by factors! for most functions. However, a contest I’m trying to put together (in vanilla C, not Python) showed that some functions are slower in mpdecimal than in IEEE (I’ll report later), and some functions are missing. Thus I became curious how Python deals with that.
My naive acid test was: 0.9999999999999995 and 0.9999999999999994 are different in decimal64, while not in binary64. Calculating ‘asin’ for them should mathematically give a difference near 3.01823E-09 (Wolfram|Alpha, not ‘negligibly small’), but produces ‘0.0’ in Python: math.asin(decimal.Decimal("0.9999999999999995")) - math.asin(decimal.Decimal("0.9999999999999994")) → 0.0. Hence the idea that this statement calculates in bin64, and the question: ‘is there a simple way to get that calculated in decimal64 precision - in Python?’
V. A small point aside: what is Decimal(‘0E-53’)? It is the result of decimal.Decimal(0.9999999999999995) - decimal.Decimal(0.9999999999999994) - no quotes.
This is both off-topic and incorrect.
I think some humor and irony should be allowed, and it makes it easier for me to pinpoint things. Let’s drop this point as OT; all that needs saying has been said.
My feedback to you is that this does not read as humor, and the only thing it makes easier is getting your posts hidden.
hello @ Łukasz Langa,
sorry that I touched / stressed your level of tolerance, it wasn’t intentional. Please let’s keep this thread on technical aspects, but as the overall situation of Python looks relevant to you and me, I’ll open another thread asking specifically about that.
If you do open another thread, it would be best to keep the facts accurate. See Python Software Foundation for some helpful information about how Python is supported.
EDIT: It’s here: Python - Microsoft deal? / what is allowed to say / ask?.
Here math.asin got just the same decimal literals, i.e. 0.9999999999999995 and 0.9999999999999994. As they represent the same binary floating-point number, 0x1.ffffffffffffbp-1, you got 0 as the difference.
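You can check that collapse directly in a session:

>>> 0.9999999999999995 == 0.9999999999999994      # one and the same binary64 value
True
>>> (0.9999999999999995).hex()
'0x1.ffffffffffffbp-1'
>>> # math.asin() converts its argument to float first, so both Decimals
>>> # become that same binary64 value and the difference is exactly 0.0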
In mpmath you can compute this difference like this:
>>> from mpmath import *
>>> mp.dps = 16
>>> asin(mpf("0.9999999999999995")) - asin(mpf("0.9999999999999994"))
mpf('2.936784648799317665e-9')
>>> f'{_:.5e}' # this requires the latest pre-release version of mpmath
'2.93678e-09'
But that’s a computation in binary floating-point arithmetic, just with precision roughly equal to decimal64.
Just zero in non-canonical form, with 0 in the digits and -53 in the exponent:
>>> decimal.Decimal(0.9999999999999995)-decimal.Decimal(0.9999999999999994)
Decimal('0E-53')
>>> _.normalize()
Decimal('0')
thank you, that is helpful info. So my assumption is correct: Python hasn’t extended mpdecimal’s list of functions, and not having trigonometric functions is one of the few shortcomings of mpdecimal vs. IEEE 754. Perhaps a suggestion for Stefan.
Correct! CPython basically just supplies a Python wrapper around Stefan’s libmpdec, which still does all the “heavy lifting”.
IEEE 754/854 specify no “advanced” functions at all, apart from sqrt(). libmpdec already goes beyond what any relevant standard requires by supplying exp(), ln(), and log10().
Possibly, but I think it unlikely he’ll tackle them. exp(), ln(), and log10() have something in common: they all guarantee to deliver a result that’s correctly rounded (under the default nearest/even rounding). That guarantee is common across (almost(*)) all of libmpdec’s functions, “in the spirit” of IEEE 754/854.
Exception-free correct rounding can be very difficult to achieve, and especially so in a variable-precision context, but special algorithms are known that can achieve it efficiently for those 3 specific functions. So Stefan implemented those.
Other libraries aren’t so wedded to “always correctly rounded”, and settle for their own meanings of “good enough”. Stefan is certainly capable of writing “good enough” algorithms too, but it doesn’t seem to be an interest of his. He’ll do it “right, to the very last half of a bit”, or not at all.
I’d personally be in favor of adding the basic libm functions to Python’s decimal repertoire in “good enough” form; “Practicality beats purity”. But I’m under no illusion about how much work that would be: a whole lot, and more than anyone without real-world experience writing math libraries could possibly guess.
(*) Its power() function does not guarantee correct rounding in all cases. No reasonably efficient way to do so is known. Look up “table maker’s dilemma” for an intro to why it’s inherently difficult.
as often, you are right; I’d been using IEEE decimal and “libdfp” as synonyms for each other, which of course isn’t correct when looking at the details.
I didn’t check yet, but I remember some work on achieving ‘correctly rounded’ results in binary, The CORE-MATH project, just in case it could help in finding algorithms.
I am quite happy that I have an idea of how bad it may be, but no exact knowledge.
however he’s much better and faster than libdfp
oh my god! no, I’m just happy to understand that it’s not about service in restaurants
Excuse me for the off-topic remark, but you’re comparing apples and oranges. Stefan was banned because he repeatedly showed objectively bad behavior, IMHO. And I write IMHO only because Tim and Stefan are friends, AFAIK.
Those (and similar projects) are aimed at a handful of specific, and relatively tiny, bit widths (no wider than 128 bits, including the exponent bits). For example, for 32-bit floats, it’s quite practical now to check every possible input.
But floats in mpmath, and in decimal, can have literally millions (even billions) of bits/digits. “Tricks” aren’t enough - establishing correct rounding in all cases requires proof. No amount of testing can even scratch the surface.
But mpmath isn’t aiming at correct rounding in all cases. Its results are typically excellent, though, well under 1 ulp maximum error. Its algorithms are the kinds I’d recode in a Python wrapper for decimal sin(), atan(), etc.
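Not the careful recoding described above, but for a rough idea of the flavour, one could round-trip through mpmath with a few guard digits. The helper name below is just illustrative, and there is no correct-rounding guarantee:

from decimal import Decimal, getcontext
import mpmath

def dec_sin(x):
    # evaluate in binary with a few guard digits, then convert back to Decimal
    prec = getcontext().prec
    old_dps = mpmath.mp.dps
    mpmath.mp.dps = prec + 5
    try:
        y = mpmath.sin(mpmath.mpf(str(x)))
        return +Decimal(mpmath.nstr(y, prec + 5))   # unary + rounds to the context precision
    finally:
        mpmath.mp.dps = old_dps

getcontext().prec = 30
print(dec_sin(Decimal("0.9999999999999995")))       # ≈ 0.841470984807896236501...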
Haha! .
I think it’s already been made clear that mentions of my suspension are unwelcome here, so please drop it. The OP and I already did.
I’m happy to talk with anyone about anything, but there is only one topic where such stuff may still be tolerated:
So move it there, or let it be. Thanks!
I would aim to build something more arb-like. I have looked at this, but it is a bigger undertaking than I have found time for. Once interval arithmetic is implemented, the relevant algorithms for real elementary transcendental functions are described here:
I would certainly start out with simpler algorithms though (much as Fredrik did).
(Which is listed as an arithmetic operation.)
To estimate just the volume of code, we could look at the MPFR sources. It’s a lot! Maybe it could be a pure-Python module (still a lot; look at mpmath, though the procedural style of libmp’s functions could increase the size of the codebase).
But the major problem I see here: there is no demand to include something like this (no feature requests “include sin!”, no packages on PyPI, etc.). IMO, “Practicality beats purity” rather argues against inclusion of elementary transcendental functions. The argument for is “the math module has this stuff” (purity).
I hope this will slowly get better, at least for the more elementary functions.
The arb-like approach (as already mentioned) is not a free lunch. This is something, however, that might be better than the iv backend in mpmath (as an alternative or, rather, a replacement).
As a happy mpmath user, I don’t care about correct rounding. “All things to all people”, in my experience, tends toward bloated systems that make excruciatingly slow progress. But maybe that’s all we can hope for anymore. Certainly, for numerical work in CPython, we have too few cooks anymore to serve up huge banquets.
As far as “practicality beats purity” goes, the OP here is my idea of “practical”: they find a lot to like in the decimal module, but hit a brick wall when they want to use a trig function. Why? All sorts of reasons, but likely none they actually care about. They typically don’t care much about rounding, or speed, or … they just want to stay in an environment they already like. I’ve seen much the same frustrations about decimal many times on StackOverflow.
The arb-like approach (as already mentioned) is not a free lunch. This is something, however, that might be better than the iv backend in mpmath (as an alternative or, rather, a replacement).
Note that Fredrik Johansson isn’t building on any kind of automated interval implementation. He’s not even using floats. He’s building his own interval bounds “by hand”, as an integer count of ULPs applied to non-negative integers (conceptually scaled by a common power of 2).
That’s because “speed” is one of his primary goals, and any form of “by magic” interval layer would bog things down. He’s not even using signed integers (mpz), just unsigned (mpn). Again, to eliminate as much overhead as possible.
In my own experience, what Velvel Kahan kept saying was true: it was almost always the case that the first several tries at programming an algorithm with “by magic” intervals gave output intervals so very wide that they were useless. There are reasons for this, of course, which take experience to worm around: the inputs to arithmetic operations are very often correlated, and worst-case bounds computed by assuming they’re independent are waaaaaaay larger than can actually happen.
The canonical example is that the value of x - x is exactly 0, regardless of how wide an interval x may span.
>>> import mpmath
>>> x = mpmath.iv.mpf([-3, 3])
>>> x
mpi('-3.0', '3.0')
>>> x - x
mpi('-6.0', '6.0')
So, no, in reality the uncertainty in the outcome of x - x didn’t actually double; in reality it shrank to nothing. By building error intervals “by hand”, Johansson can avoid such annoyances. But then those manual constructions also need their own correctness proofs.
Note: I’m not at all saying mpmath.iv is useless. To the contrary, it can be extremely useful. But it’s not magic, and the basic approach is delicate in many real-world situations, to the point of being useless if naively applied.