# Allow any subclass of numbers.Real in the creation of fractions.Fraction

fractions.Fraction should accept numbers.Real instead of just numbers.Rational.

Since all subclasses of Real should implement a conversion to float, and we already accept some precision issues with float, I see no reason not to allow the creation of a Fraction from a Real number.

```python
# Currently line 247 in fractions.py
elif isinstance(numerator, (float, Decimal)):
    # Exact conversion
    self._numerator, self._denominator = numerator.as_integer_ratio()
    return self

# Proposed addition:
elif isinstance(numerator, numbers.Real):
    self._numerator, self._denominator = float(numerator).as_integer_ratio()
    return self

elif isinstance(numerator, str):
    ...
```
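As an illustration (mine, not part of the patch): the existing float/Decimal branch is exact because as_integer_ratio returns the stored value exactly, while the proposed Real branch would round through float first:

```python
from decimal import Decimal
from fractions import Fraction

# Exact today: float and Decimal expose as_integer_ratio(), so the stored
# ratio is exactly the value of the input object.
assert Fraction(0.25) == Fraction(1, 4)
assert Fraction(Decimal("0.1")) == Fraction(1, 10)

# The binary float 0.1 is itself only an approximation of 1/10, so any
# conversion routed through float inherits that approximation.
assert Fraction(0.1) != Fraction(1, 10)
```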


My current use case for this is a class that internally uses Fraction to store its value with "infinite" precision (or, in the case of float / Real, a good enough approximation).
The part of the setter for the internal value currently looks like

```python
...
elif isinstance(value, (str, Decimal, numbers.Rational)):
    return Fraction(value)

elif isinstance(value, numbers.Real):
    return Fraction(float(value))
...
```


would turn into

```python
...
elif isinstance(value, (str, Decimal, numbers.Real)):
    return Fraction(value)
...
```


It's not much shorter, but it would remove the need to write two checks every time someone wants to convert a Real number into a Fraction.

Is there any issue I overlooked?
Is this too much of a niche case?

Converting Decimal or float128 to float may lose precision.

No, we don't. A float is an exact representation, and that exact value can be found via the as_integer_ratio method.

I could accept the idea that converting anything that has an as_integer_ratio method into a fraction is a reasonable enhancement, but converting via float does lose precision, not in converting the float to a fraction, but in converting the source type to float.

But given that numbers.Real doesn't imply the existence of as_integer_ratio, you'd have to do this by checking for the attribute, and you don't need to change the Fraction class for that:

```python
def as_fraction(num):
    if hasattr(num, "as_integer_ratio"):
        return Fraction(*num.as_integer_ratio())
    return Fraction(num)
```
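To show what the helper buys (an illustration of mine, stdlib only): a Decimal converted via its own as_integer_ratio keeps its exact value, while a detour through float does not:

```python
from decimal import Decimal
from fractions import Fraction

def as_fraction(num):
    # Helper suggested above: prefer the type's own exact ratio if available.
    if hasattr(num, "as_integer_ratio"):
        return Fraction(*num.as_integer_ratio())
    return Fraction(num)

d = Decimal("0.1")
assert as_fraction(d) == Fraction(1, 10)        # exact, no float involved
assert Fraction(float(d)) != Fraction(1, 10)    # lossy detour through float
```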


@storchaka Shouldn't Decimal already be covered by the first elif?

That's about converting decimal fractions to float, which does lose precision. But once you have a float, it's exact and there are no precision issues. Converting a float to a fraction and back again returns the same value: float(Fraction(flt)) == flt is True. The same is true of any type you can pass to the Fraction constructor.

Your proposal would violate that identity for types that can't be converted to float without loss of precision, even if those types can only represent rational numbers.
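The identity, and how a lossy detour through float would break it, can be checked with stdlib types (my sketch, not from the thread):

```python
from decimal import Decimal
from fractions import Fraction

# The defended identity: a float round-trips through Fraction exactly.
flt = 0.1
assert float(Fraction(flt)) == flt

# A Decimal with more digits than float can hold is a rational number,
# but converting it through float loses the exact value.
d = Decimal("0.12345678901234567890")
exact = Fraction(d)            # today's behaviour: exact
via_float = Fraction(float(d)) # what the proposal would do for Real types
assert exact != via_float
```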


I didn't have this identity in mind when I posted this proposal.
Thank you for explaining.

Proposal withdrawn.


The idea of @pf_moore looks more reasonable. A similar idea was already discussed, but rejected, because the proposed implementation used as_integer_ratio() in the two-argument form as well. Limiting this to the single-argument form is more reasonable.

I have re-opened that issue.


We accept them because the float type is already known and understood not to represent exact "real" values (because you can't actually represent an irrational number in finite space, by definition).

But if I'm doing a symbolic calculation and I want a custom type that stands for the exact, platonic concept of pi (i.e., not the float value provided by math.pi), I would be surprised and disappointed if someone else fed it to Fraction and got a rational number.

It would work anyway to convert to float explicitly first (since __float__ is part of the numbers.Real ABC).

I would say that we do. Even taking the model that the float represents its .as_integer_ratio result (or Fraction conversion) exactly, the operations are not exact. The set of such exact values represented by the float type is not closed under, say, exponentiation, yet we define ** anyway.

As a disclaimer, I don't personally have any need for, or expertise in, the details of floating point, but I will say that's not an entirely accurate statement. In terms of the bigger picture, I get what you're saying, but the argument doesn't hold when you look closer.

Floating point exponentiation is precisely defined[1] by the IEEE standards that cover floating point. The rules are exact, in the sense that given the same inputs, the same output will always be produced.

The "inexactness" appears because people use floating point numbers to model the mathematical real numbers. And like any model, the correspondence isn't 100% (if it were, it wouldn't be a model!). So, for example, the float 1.0 and the float 3.0 are used to model the mathematical real numbers 1 and 3, and floating point division is used to model real division. But there is no floating point value that exactly models the real value 1/3. What we do have is a number of guaranteed properties of floating point division that ensure that the float value calculated as 1.0/3.0 models a mathematical real number that is close to the result of the mathematical real division 1/3. So, in that sense, floating point division is a good, but not exact, model of mathematical real division[2]. That closeness is carefully defined by the standards, and you can calculate it exactly (using operations like as_integer_ratio, in fact).

But characterising that as "floating point loses precision when calculating 1/3" is an extremely superficial way of looking at things. It's superficial in a way that's entirely sufficient for most people (including me!) but there are experts and specialists (a number of whom have been instrumental in developing Python's floating point implementation) who rely heavily on being able to view floats in terms of deterministic and exact operations.
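That "exact operation, approximate model" point can be made concrete (my example, not from the thread): the float computed as 1.0/3.0 has an exactly knowable rational value, which is close to, but not equal to, the real number 1/3:

```python
from fractions import Fraction

# The result of 1.0 / 3.0 is deterministic and its value is exactly
# recoverable via as_integer_ratio -- it just isn't the real number 1/3.
third = 1.0 / 3.0
exact = Fraction(*third.as_integer_ratio())

assert exact != Fraction(1, 3)                            # not the real 1/3...
assert abs(exact - Fraction(1, 3)) < Fraction(1, 2**53)   # ...but provably close
```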

Sorry - switching back from pedant mode now.

The relevance of this to the OP's proposal is that the Fraction constructor is an exact conversion from one exact representation to another, facilitated by the as_integer_ratio method, which is defined to be exact itself. Nothing in the existing code makes any sort of lossy conversion from one type to another. That's what the OP's proposal introduced: going from a user-defined type to Fraction via a potentially lossy conversion to float. The final proposal, to respect as_integer_ratio when user-defined classes provide it, replaces that lossy conversion with an exact representation that the source class has full control of, and responsibility for.

1. Actually, I'm not sure if exponentiation is a primitive defined by the standards, or a derived operation like, for example, sin or exp. But it doesn't really matter, the arguments remain the same for other primitive operations. ↩︎

2. It's the model that's inexact, not the floating point numbers! ↩︎

I know. What I mean is that the defined, consistent, rule-abiding result is not the exact mathematically correct result (which, in the general case, is not a rational number). I further posit that "precision issue" is a reasonable term to describe that fact.

I agree with your objection to the original proposal and that going through as_integer_ratio is an appropriate remedy (as it allows the class to refuse such conversion). It was just a semantic argument and isn't really worth this many words.


The general problem with numbers.Real in the numeric tower is that it does not include enough to do anything useful while writing code that is targeted at the ABC rather than a concrete type.

The as_integer_ratio method is not part of the ABC. It is also not generally what you would want to use for anything other than conversion to Rational. In practice most numbers.Real types are floating point types but the ABC gives no way to know if a given object is of some type of floating point and no way to query any properties of the particular floating point representation.

A more general conversion between numbers.Real types should provide a way to distinguish Rational from floating point vs anything else that is numbers.Real and also a way to know the base precision etc of the format. The ideal conversion function rather than as_integer_ratio would be something like:

```python
mantissa, base, exponent = obj.fp_tuple()
# obj == mantissa * base ** exponent
```
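fp_tuple does not exist on any stdlib type; as a sketch of what such a protocol could return, here is a hypothetical implementation for float and Decimal built on existing introspection (math.frexp and Decimal.as_tuple):

```python
import math
from decimal import Decimal
from fractions import Fraction

def fp_tuple(obj):
    """Hypothetical protocol: return (mantissa, base, exponent) such that
    obj == mantissa * base ** exponent.  Finite values only."""
    if isinstance(obj, float):
        frac, exp2 = math.frexp(obj)          # obj == frac * 2**exp2, 0.5 <= |frac| < 1
        return int(frac * 2**53), 2, exp2 - 53
    if isinstance(obj, Decimal):
        sign, digits, exp10 = obj.as_tuple()  # exact decimal digits and exponent
        m = int("".join(map(str, digits)))
        return (-m if sign else m), 10, exp10
    raise TypeError(f"no floating point representation for {type(obj).__name__}")

# The representation is exact, and cheap even for huge values like 1e100000:
m, b, e = fp_tuple(0.1)
assert Fraction(m) * Fraction(b) ** e == Fraction(0.1)
assert fp_tuple(Decimal("1e100000")) == (1, 10, 100000)
```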


This is needed because as_integer_ratio is horribly inefficient for large numbers:

```pycon
>>> decimal.Decimal('1e100000').as_integer_ratio()
...
ValueError: Exceeds the limit (4300 digits) for integer string conversion; use sys.set_int_max_str_digits() to increase the limit
```


The only conversion function defined by the ABC is float(obj), but the whole point of other floating point types is that they have greater precision, a different base, etc., so that they can represent values that float cannot.

If they are floating point types (with a rational base and integer powers), they should subclass Rational, not Real. This includes the stdlib Decimal type. [1]

The reason there aren't any good conversion functions away from Real is that there is no good representation of such objects that doesn't lose an infinite amount of precision, except for the original object itself. So either the consumer can understand the original object directly, or they are willing to lose a lot of precision.

But I agree that as_fp_tuple seems like an interesting suggestion… for Rational types.

1. which doesn't subclass either for a different reason ↩︎


No they should not. The numbers.Rational type should be reserved for exact representations of rational numbers that have exact arithmetic and no special values like nan or inf. Examples of exact rational number types:

• fractions.Fraction
• gmpy2.mpq
• flint.fmpq
• sympy.Rational
• sympy.QQ
• …

This is not true for floating point formats, even though each finite floating point value can be represented exactly as m × b^e with exact integer values for m, b, and e. This representation can exactly represent the values of floating point types like:

• float
• decimal.Decimal
• mpmath.mpf
• gmpy2.mpfr
• numpy.float16
• sympy.Float
• …

What is needed is a way to know whether a numbers.Real uses a floating point format, and then a way to get the floating point representation in terms of numbers.Integral. Also needed is a uniform way to check for special values like nan.


What are you basing this definition on? I am using the mathematical definitions of Real and Rational, which clearly say that all floating point numbers are Rational. Real classes would also be able to represent numbers like e, pi, or sqrt(2). If these classes are supposed to represent something else, that needs to be defined somewhere.

Which is why they can only represent rational numbers. Maybe we should introduce a new class as a subclass of Rational, named FloatingPoint or something.

No it doesn't, because inf and nan are not rational numbers or real numbers. Don't get too hung up on the mathematical definitions. What matters in practice is that there are different approaches to numeric computing. The value of computing with rationals, as distinct from floating point numbers, comes from their having exact arithmetic. If floats were also numbers.Rational then numbers.Rational would be a useless designation.

An Exact base was considered for the numeric tower:

```python
class Rational(Real, Exact):
    ...
```


but the Exact type was not included in the end, presumably because it would not have been used for anything in the stdlib besides being a nominal superclass for Rational, which is redundant if we understand that Rational implies exact arithmetic.

It is very obvious that the numbers ABCs are intended to correspond to the builtin/stdlib types:

• numbers.Complex → complex
• numbers.Real → float
• numbers.Rational → Fraction
• numbers.Integral → int

Hence:

```pycon
In [8]: isinstance(1.0, numbers.Rational)
Out[8]: False

In [9]: isinstance(float('inf'), numbers.Rational)
Out[9]: False

In [10]: isinstance(1.0, numbers.Real)
Out[10]: True

In [11]: isinstance(float('inf'), numbers.Real)
Out[11]: True
```


Unfortunately the original PEP overlooked the fact that 99% of the time numbers.Real means floating point numbers, and omitted any useful abstract methods for floating point.