Support native Fraction to Decimal conversion

A long time ago, there was some discussion concerning native conversion from fractions.Fraction to decimal.Decimal. The feature request was spurred by an SO question linked in the thread (which has a modern semi-duplicate with more traffic that I’d link if not for the new user post link limit).

The discussion never led to any PRs at the time, but two points of consensus did emerge:

  1. It was unclear what the “right” conversion method should be: from_fraction? to_decimal? Implicit in the constructor?
  2. The SO asker really needed arbitrary precision decimal expansions of a fraction, so that’s worth implementing using string formatting.

More than ten years down the line, 2. has been addressed quite directly with float-style formatting for Fraction, thanks to @mdickinson (who also proposed 2. in the original thread). The implementation of 2. has brought all of the necessary conversion logic into the fractions module, permitting a call like

my_dec = decimal.Decimal(f"{my_frac:.<prec>f}")

to do the conversion via an intermediary string. It’s now just a matter of pushing such a conversion behind the scenes with the right built-in method(s).

(Note: the above call is terrible and you should not use it)

Thus, we only need to settle 1. A year prior to the first discussion, in 3.2, both the Decimal and Fraction constructors were changed to accept all types implicitly, instead of needing any from_* methods. The latter still exist, of course, and can be useful for catching type errors. On name alone, I therefore find to_decimal on a Fraction far less preferable than from_fraction on a Decimal.

Such a from_fraction method should accept a Fraction object and nothing more. That is, there is no need to “chain” constructors and consider accepting values which themselves construct fractions, like (numerator, denominator). This would also make it more straightforward to update the constructor.
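Put together, the semantics I have in mind could be sketched in pure Python as follows (the name from_fraction is the proposal itself; a real implementation would of course live on Decimal, not as a free function):

```python
from decimal import Decimal
from fractions import Fraction

def from_fraction(f):
    """Sketch of the proposed Decimal.from_fraction classmethod."""
    # Accept a Fraction and nothing more -- no (numerator, denominator)
    # pairs or other fraction-constructing values.
    if not isinstance(f, Fraction):
        raise TypeError(f"expected Fraction, got {type(f).__name__}")
    # Division applies the current decimal context's precision and rounding.
    return Decimal(f.numerator) / Decimal(f.denominator)

print(from_fraction(Fraction(11, 7)))  # 1.571428571428571428571428571
```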

To deal with context, we turn to from_float. The Decimal classmethod uses the global context, which will be sufficient most of the time. A Context object then has create_decimal_from_float, yielding an obvious analogue. Personally, I find create_decimal_from_float extremely awkward, but given the need to support contexts, it’s the best we have, and in my opinion it beats passing a Context object as a normal argument somewhere.
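To illustrate the two routes with today’s tools: the global-context behaviour alongside a context-bound division. The name create_decimal_from_fraction is only my guess at what the analogue would be called, and the prec=9 context simply mimics BasicContext:

```python
import decimal
from fractions import Fraction

x = Fraction(11, 7)

# Global-context route, mirroring Decimal.from_float
# (default context precision is 28 significant digits):
d1 = decimal.Decimal(x.numerator) / decimal.Decimal(x.denominator)
print(d1)  # 1.571428571428571428571428571

# Context-bound route, mirroring Context.create_decimal_from_float;
# the hypothetical analogue would be Context.create_decimal_from_fraction.
ctx = decimal.Context(prec=9)
d2 = ctx.divide(decimal.Decimal(x.numerator), decimal.Decimal(x.denominator))
print(d2)  # 1.57142857
```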

Those with more experience than I do likely see additional subtleties of implementation and/or design that need addressing. Nonetheless, I would like to encourage a reexamination of the comments and critiques made over a decade ago. Native conversion is, I think, an obvious utility, even if my exact proposal is not the right direction to take it.

In terms of behaviour, is the effect you’re looking for significantly different from converting a Fraction x to Decimal(x.numerator) / Decimal(x.denominator)? For example:

Python 3.12.0 (main, Oct  2 2023, 18:38:13) [Clang 14.0.3 (clang-1403.0.22.14.1)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> from decimal import Decimal
>>> from fractions import Fraction
>>> x = Fraction(11, 7)
>>> Decimal(x.numerator) / Decimal(x.denominator)
Decimal('1.571428571428571428571428571')

or for the bound-to-context version:

>>> from decimal import BasicContext
>>> BasicContext.divide(Decimal(x.numerator), Decimal(x.denominator))
Decimal('1.57142857')

That is, my understanding is that you’re simply looking for a more convenient spelling for the above operations. Is that correct, or are you actually looking for slightly different behaviour somewhere?

Yes, it is just a matter of convenience. It feels a bit silly to have the current ops be the canonical way to convert between the types, especially with the added float formatting.
