Allow any subclass of numbers.Real in the creation of fractions.Fraction

fractions.Fraction should accept numbers.Real instead of just numbers.Rational.

Since all subclasses of Real should implement a conversion to float, and we already accept some precision issues with float, I see no reason not to allow the creation of a Fraction from a Real number.

...
# Currently line 247 in fractions.py
elif isinstance(numerator, (float, Decimal)):
    # Exact conversion
    self._numerator, self._denominator = numerator.as_integer_ratio()
    return self

# Proposed addition
elif isinstance(numerator, numbers.Real):
    self._numerator, self._denominator = float(numerator).as_integer_ratio()
    return self

elif isinstance(numerator, str):
...
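To make the effect concrete, here is a toy sketch (the Third class is invented purely for illustration; it is registered with numbers.Real rather than subclassing it, so only __float__ is provided):

import numbers
from fractions import Fraction

class Third:
    """Toy stand-in for a numbers.Real that is not a float, Decimal or Rational."""
    def __float__(self):
        return 1.0 / 3.0

numbers.Real.register(Third)   # virtual subclass; only __float__ is implemented

try:
    Fraction(Third())          # rejected today: not Rational, float, Decimal or str
except TypeError as exc:
    print(exc)

# Under the proposal this would instead produce the exact ratio of float(Third()):
ratio = Fraction(*float(Third()).as_integer_ratio())
# ratio == Fraction(6004799503160661, 18014398509481984)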

My current use case for this is a class that internally uses Fraction to store its value with “infinite” precision (or, in the case of float / Real, a good enough approximation).
The part of the setter for the internal value that currently looks like

...
elif isinstance(value, (str, Decimal, numbers.Rational)):
    return Fraction(value)

elif isinstance(value, numbers.Real):
    return Fraction(float(value))
...

would turn into

...
elif isinstance(value, (str, Decimal, numbers.Real)):
    return Fraction(value)
...

It’s not much shorter, but would prevent the need to write two checks every time someone wants to convert a Real number into a Fraction.

Is there any issue I overlooked?
Is this too much of a niche case?

Converting Decimal or float128 to float may lose precision.

No, we don’t. A float is an exact representation, and that exact value can be found via the as_integer_ratio method.

I could accept the idea that converting anything that has an as_integer_ratio method into a fraction is a reasonable enhancement, but converting via float does lose precision, not in converting the float to a fraction, but in converting the source type to float.

But given that numbers.Real doesn’t imply the existence of as_integer_ratio, you’d have to do this by checking for the attribute, and you don’t need to change the Fraction class for that:

from fractions import Fraction

def as_fraction(num):
    # Prefer the type's own exact integer ratio if it provides one
    if hasattr(num, "as_integer_ratio"):
        return Fraction(*num.as_integer_ratio())
    return Fraction(num)
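Reusing that as_fraction definition, a quick usage sketch with a made-up user type that provides its own exact ratio (Tenths is purely illustrative):

class Tenths:
    """Hypothetical fixed-point type storing an exact value as a count of tenths."""
    def __init__(self, tenths):
        self._tenths = tenths

    def as_integer_ratio(self):
        return self._tenths, 10

as_fraction(Tenths(3))   # Fraction(3, 10), taken from the type's own exact ratio
as_fraction("3/10")      # Fraction(3, 10), via the ordinary Fraction constructor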
1 Like

@storchaka Shouldn’t Decimal already be covered by the first elif?
@pf_moore My comment about the precision of float was about converting decimal values to float.

That’s about converting decimal fractions to float, which does lose precision. But once you have a float, it’s exact and there are no precision issues. Converting a float to a fraction and back again returns the same value: float(Fraction(flt)) == flt is True. The same is true of any type you can pass to the Fraction constructor.

Your proposal will violate that identity for types that can’t be converted to float without loss of precision, even if those types can only represent rational numbers.
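To put numbers on that, a small sketch assuming the usual IEEE 754 binary64 float (the Decimal value is arbitrary):

from decimal import Decimal
from fractions import Fraction

flt = 0.1
assert float(Fraction(flt)) == flt                  # a float round-trips exactly

d = Decimal("0.1000000000000000000001")             # more digits than a binary64 float holds
assert Fraction(d) == Fraction(10**21 + 1, 10**22)  # the current constructor keeps it exact
assert Fraction(float(d)) != Fraction(d)            # the proposed detour through float would not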

2 Likes

I didn’t have this identity in mind when I posted this proposal.
Thank you for explaining.

Proposal withdrawn.

1 Like

The idea from @pf_moore looks more reasonable. A similar idea was already discussed, but rejected, because the proposed implementation used as_integer_ratio() in the two-argument form as well. Limiting this to the single-argument form makes more sense.

I have re-opened that issue.

2 Likes

We accept them because the float type is already known and understood not to represent exact “real” values (because you can’t actually represent an irrational number in finite space, by definition).

But if I’m doing a symbolic calculation and I want a custom type that stands for the exact, platonic concept of pi (i.e., not the float value provided by math.pi), I would be surprised and disappointed if someone else fed it to Fraction and got a rational number.

It would work anyway to convert to float explicitly first (since __float__ is part of the numbers.Real ABC).

I would say that we do. Even taking the model that the float represents its .as_integer_ratio result (or Fraction conversion) exactly, the operations are not exact. The set of such exact values represented by the float type is not closed under, say, exponentiation, yet we define ** anyway.

As a disclaimer, I don’t personally have any need for, or expertise in, the details of floating point, but I will say that’s not an entirely accurate statement. In terms of the bigger picture, I get what you’re saying, but the argument doesn’t hold when you look closer.

Floating point exponentiation is precisely defined[1] by the IEEE standards that cover floating point. The rules are exact, in the sense that given the same inputs, the same output will always be produced.

The “inexactness” appears because people use floating point numbers to model the mathematical real numbers. And like any model, the correspondence isn’t 100% (if it were, it wouldn’t be a model!). So, for example, the float 1.0 and the float 3.0 are used to model the mathematical real numbers 1 and 3. Floating point division is used to model real division. But there is no floating point value that exactly models the real value 1/3. What we do have is a number of guaranteed properties of floating point division that ensure that the float value calculated as 1.0/3.0 models a mathematical real number that is close to the mathematical real number 1/3. So, in that sense, floating point division is a good, but not exact, model of mathematical real division[2]. That closeness is carefully defined by the standards, and you can calculate it exactly (using operations like as_integer_ratio, in fact). But characterising that as “floating point loses precision when calculating 1/3” is an extremely superficial way of looking at things. It’s superficial in a way that’s entirely sufficient for most people (including me!), but there are experts and specialists (a number of whom have been instrumental in developing Python’s floating point implementation) who rely heavily on being able to view floats in terms of deterministic and exact operations.
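For instance, the rounding error of 1.0 / 3.0 can itself be computed exactly (a small sketch assuming the usual binary64 float):

from fractions import Fraction

approx = 1.0 / 3.0   # the correctly rounded double closest to the real number 1/3
error = Fraction(1, 3) - Fraction(*approx.as_integer_ratio())
# error is an exact rational, roughly 1.85e-17 for binary64 floats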

Sorry - switching back from pedant mode now.

The relevance of this to the OP’s proposal is that the Fraction constructor is an exact conversion from one exact representation to another, facilitated by the as_integer_ratio method, which is defined to be exact itself. Nothing in the existing code makes any sort of lossy conversion from one type to another. That’s what the OP’s proposal introduced - going from a user defined type to Fraction via a potentially lossy conversion to float. The final proposal, to respect as_integer_ratio when user defined classes provide it, replaces that lossy conversion with an exact representation that the source class has full control of, and responsibility for.


  1. Actually, I’m not sure if exponentiation is a primitive defined by the standards, or a derived operation like, for example, sin or exp. But it doesn’t really matter; the arguments remain the same for other primitive operations. ↩︎

  2. It’s the model that’s inexact, not the floating point numbers! ↩︎

I know. What I mean is that the defined, consistent, rule-abiding result is not the exact mathematically correct result (which, in the general case, is not a rational number). I further posit that “precision issue” is a reasonable term to describe that fact.

I agree with your objection to the original proposal and that going through as_integer_ratio is an appropriate remedy (as it allows the class to refuse such conversion). It was just a semantic argument and isn’t really worth this many words.

1 Like

The general problem with numbers.Real in the numeric tower is that it does not include enough to do anything useful while writing code that is targeted at the ABC rather than a concrete type.

The as_integer_ratio method is not part of the ABC. It is also not generally what you would want to use for anything other than conversion to Rational. In practice most numbers.Real types are floating point types, but the ABC gives no way to know whether a given object is some kind of floating point type, and no way to query any properties of the particular floating point representation.

A more general conversion between numbers.Real types should provide a way to distinguish Rational from floating point vs anything else that is numbers.Real, and also a way to know the base, precision, etc. of the format. The ideal conversion function, rather than as_integer_ratio, would be something like:

mantissa, base, exponent = obj.fp_tuple()
# obj == mantissa * base ** exponent

This is needed because as_integer_ratio is horribly inefficient for large numbers:

>>> decimal.Decimal('1e100000').as_integer_ratio()
...
ValueError: Exceeds the limit (4300 digits) for integer string conversion; use sys.set_int_max_str_digits() to increase the limit

The only conversion function defined by the ABC is float(obj), but the whole point of other floating point types is that they have greater precision, a different base, etc., so that they can represent values that float cannot.
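For what it’s worth, here is a rough sketch of how such a helper could look for float and Decimal (fp_tuple is the hypothetical name used above, not an existing API; finite values only):

import math
from decimal import Decimal

def fp_tuple(x):
    """Return (mantissa, base, exponent) as ints with x == mantissa * base ** exponent.

    Sketch only: handles finite float and Decimal values, nothing else.
    """
    if isinstance(x, float):
        m, e = math.frexp(x)              # x == m * 2**e with 0.5 <= abs(m) < 1 (or m == 0)
        return int(m * 2**53), 2, e - 53  # scale the 53-bit significand to an exact integer
    if isinstance(x, Decimal):
        sign, digits, exp = x.as_tuple()
        mantissa = int("".join(map(str, digits)))
        return (-mantissa if sign else mantissa), 10, exp
    raise TypeError(f"no floating point representation known for {type(x).__name__}")

fp_tuple(0.1)                   # (7205759403792794, 2, -56)
fp_tuple(Decimal("1e100000"))   # (1, 10, 100000), without ever creating a huge integer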

If they are floating point types (with a rational base and integer powers), they should subclass Rational, not Real. This includes the stdlib Decimal type. [1]

The reason there aren’t any good conversion functions away from Real is that there is no good representation of such objects that doesn’t lose an infinite amount of precision, except for the original object itself. So either the consumer can understand the original object directly or they are willing to lose a lot of precision.

But I agree that as_fp_tuple seems like an interesting suggestion… for Rational types.


  1. which doesn’t subclass either for a different reason ↩︎

1 Like

No, they should not. The numbers.Rational type should be reserved for exact representations of rational numbers that have exact arithmetic and no special values like nan or inf. Examples of exact rational number types:

  • fractions.Fraction
  • gmpy2.mpq
  • flint.fmpq
  • sympy.Rational
  • sympy.QQ
  • …

This is not true of floating point formats, even though their values can be represented exactly as m × b^e with exact integer values for m, b and e. This representation can exactly represent the values of floating point types like:

  • float
  • decimal.Decimal
  • mpmath.mpf
  • gmpy2.mpfr
  • numpy.float16
  • sympy.Float
  • …

What is needed is a way to know whether a numbers.Real uses a floating point format, a way to get that floating point representation in terms of numbers.Integral, and a uniform way to check for special values like nan.
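Purely as a sketch of that wish list, such an ABC might look something like this (every name here is invented; nothing like it exists in numbers today):

import numbers
from abc import abstractmethod

class FloatingPointReal(numbers.Real):
    """Hypothetical ABC: a Real whose values are mantissa * base ** exponent."""

    @abstractmethod
    def fp_tuple(self):
        """Return (mantissa, base, exponent) as numbers.Integral for finite values."""

    @abstractmethod
    def is_nan(self):
        """Uniform check for the nan special value."""

    @abstractmethod
    def is_infinite(self):
        """Uniform check for the infinite special values."""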

2 Likes

What are you basing this definition on? I am using the mathematical definitions of Real and Rational, which clearly say that all floating point numbers are Rational. Real classes would also be able to represent numbers like e, pi or sqrt(2). If these classes are supposed to represent something else, they need to be defined somewhere.

Which is why they can only represent rational numbers. Maybe we should introduce a new class as a subclass of Rational, named FloatingPoint or something.

No, it doesn’t, because inf and nan are not rational numbers or real numbers. Don’t get too hung up on the mathematical definitions. What matters in practice is that there are different approaches to numeric computing. The value of computing with rationals, as distinct from floating point numbers, comes from their having exact arithmetic. If floats were also numbers.Rational then numbers.Rational would be a useless designation.

The original PEP 3141 had:

class Rational(Real, Exact):
    ...

but the Exact type was not included in the end, presumably because it would not have been used for anything in the stdlib besides being a nominal superclass for Rational, which is redundant if we understand that Rational implies exact arithmetic.

It is very obvious that the numbers ABCs are intended to correspond to the builtin/stdlib types:

  • numbers.Complex → complex
  • numbers.Real → float
  • numbers.Rational → Fraction
  • numbers.Integral → int

Hence:

In [8]: isinstance(1.0, numbers.Rational)
Out[8]: False

In [9]: isinstance(float('inf'), numbers.Rational)
Out[9]: False

In [10]: isinstance(1.0, numbers.Real)
Out[10]: True

In [11]: isinstance(float('inf'), numbers.Real)
Out[11]: True

Unfortunately the original PEP overlooked the fact that 99% of the time numbers.Real means floating point numbers, and so did not include any useful abstract methods for floating point.