Literals for Decimal and Fraction

For applications that require high numeric precision, having to write every constant number as an explicit decimal.Decimal or fractions.Fraction instance such as Decimal('1.2') makes the code ugly to read.

Since Python supports the relatively esoteric literal of a complex number 1+2j as opposed to requiring the user to write complex(1, 2), I think it makes sense to add support for decimal and fraction literals such as 1.2d and 1/3f as well.


Nitpick:

There are no complex literals (complex numbers can be formed by adding a real number and an imaginary number).

Edit: given your 1/3f, maybe it’s not a nitpick. That might likewise be dividing the int 1 by the fraction 3f.

Ah right. Thanks for the correction.

Note that a complex number formed by adding a real number and an imaginary number is stored as a single complex constant by the compiler:

def f():
    return 1+2j

print(f.__code__.co_consts) # outputs (None, (1+2j))

Yes, 1/3f can be interpreted as either 1 / Fraction(3) or Fraction(1, 3), and the result is Fraction(1, 3) either way.
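A quick check with today's fractions API confirms that the two readings coincide (a sketch with real code, not the proposed syntax):

```python
from fractions import Fraction

# Two possible readings of a hypothetical 1/3f:
a = 1 / Fraction(3)   # the int 1 divided by a Fraction "3f"
b = Fraction(1, 3)    # a single fraction literal "1/3f"

print(a == b)         # True: both readings give Fraction(1, 3)
```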


That’s “constant folding”, and it’s the same thing that lets us have tuple constants and frozenset constants too, even though we don’t (strictly speaking) have literals for them. But arguably, 2j is already a complex literal - there’s no separate imaginary type, so the only distinction is that a complex literal always has a real part of zero. As Stefan said though, that’s really just a nitpick and arguing about terminology.
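The folding for tuples (and for frozensets in membership tests) is easy to observe by inspecting co_consts; a small sketch, with the caveat that the exact constant layout can differ between CPython versions:

```python
def g():
    return (1, 2, 3)

# The tuple display was folded into a single tuple constant:
print((1, 2, 3) in g.__code__.co_consts)             # True

def h(x):
    return x in {1, 2, 3}

# A constant set in a membership test becomes a frozenset constant:
print(frozenset({1, 2, 3}) in h.__code__.co_consts)  # True
```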

So, should we have 1.2d as a literal form? Here’s one small problem: Contexts. You can set and get context objects that change precision and other attributes. Should they affect Decimal literals?
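To make the problem concrete, here is a sketch of how a context changes arithmetic with today's decimal module (the precision value is arbitrary); a hypothetical 1.2d would have to pick a behaviour here:

```python
from decimal import Decimal, localcontext

with localcontext() as ctx:
    ctx.prec = 3                   # three significant digits
    x = Decimal('1.2345') + 0      # arithmetic obeys the active context
    print(x)                       # 1.23
```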

I think a Fraction literal would be safer, but on the other hand, not nearly as useful. You could do something like this:

import fractions

f1 = fractions.Fraction(1)

# ...

one_seventh = f1/7
two_sevenths = f1 * 2/7

which looks almost like a literal, just with the f on the other end. It’s a little less clear for anything that’s not an integer’s reciprocal, and it won’t improve the repr, but it is an option.
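One subtlety of this trick, sketched below: because evaluation is left to right, the division already sees a Fraction, but parenthesizing differently silently falls back to float arithmetic:

```python
from fractions import Fraction

f1 = Fraction(1)

# Left to right: (f1 * 2) is a Fraction, so the / 7 stays exact.
print(f1 * 2 / 7)      # 2/7

# Here 2/7 is evaluated as a float first, and Fraction * float is a float.
print(f1 * (2 / 7))    # 0.2857142857142857
```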


IIRC, something like this was already proposed.

Both Decimal and Fraction types are part of the stdlib, but not the core language. Probably, that’s a reason why there is a special literal for imaginary numbers, but not for Fractions.

Personally, I would appreciate better support for fractions in the core language, perhaps with some interpreter option to override the odd integer-division behaviour (returning a Fraction instead of a float). In that case you wouldn’t need special literals for fractions.

But right now, as long as these types live only in the stdlib, I don’t think that special literals make sense.

This is just a little more verbose. :) On the other hand, it probably makes code more obvious for novices. More or less the same argument was made by Tim Peters against hexadecimal floating-point literals (vs. the current float.fromhex() class method).
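For reference, float.fromhex() is the spelling preferred in that argument; a small sketch of how it reads:

```python
# 0x1.8 means 1 + 8/16 = 1.5, and p+1 scales by 2**1:
x = float.fromhex('0x1.8p+1')
print(x)                                  # 3.0

# Every float round-trips exactly through its hex form:
print(float.fromhex((0.1).hex()) == 0.1)  # True
```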

This might be more tedious during interactive work. But you can easily add simple AST transformations to rewrite code on the fly. Here is an mpmath example:

$ python -m mpmath --no-ipython
>>> 1/3
Fraction(1, 3)
>>> 1.23456
mpf('1.2345600000000001')

Technically, using import hooks you can extend this approach to regular code as well. See e.g. decimal math example in ideas project.


Very good point. So a decimal literal can’t be stored as a constant, and has to be evaluated at runtime at the very spot it’s written so that it can be affected by the context just like writing Decimal('1.2') would.

That’s definitely a viable workaround, although writing f1 * 2/7 still looks rather noisy to me with an additional multiplication compared to a hypothetical 2/7f.

I agree, but only to some extent.

Compared to 1+2j, which is already a recognized representation of a complex number outside of Python, the 1.2d and 1/3f literal formats are going to have to be learned by Python novices instead.

But compared to your proposed hexadecimal floating-point literals such as +0X1.921F9F01B866EP+1, decimal and fraction literals like 1.2d and 1/3f look a lot less cryptic. Most people are already familiar with 1.2 and 1/3, so the only thing to learn is the suffix, just like learning the f prefix of an f-string: yes, it’s something to learn, but it feels intuitive once learned.

Those are cool modules indeed and should indeed satisfy most use cases. Thanks!

The main downside is that AST/source/code-object transformations can only be applied to imported modules via an import hook; the main program itself can’t be transformed this way.

Note that the f suffix is used in other languages like C++ to write float literals. This could cause confusion if it means Fraction in Python.

Also, this kind of constant was already proposed (and rejected) many times before. What is new in your proposal?


Recently it was discussed if PEP 750 would allow this at the library level:

But I think that’s out in the latest revision.

A different language is expected to have a different syntax. For example, {1, 2} initializes an array in C/C++ but is a set in Python.

But for sure we can choose a different suffix such as r for rational or something to that effect.

I tried searching for existing proposals before I posted but couldn’t find any. Can you help link me to some?

EDIT: Finally found one here.

Indeed the proposed tag string syntax would be a decent alternative to a dedicated decimal/fraction literal. My only reservation is that the syntax highlighters would likely color them as strings rather than numbers unless the D tag comes directly from the standard library, in which case we might as well have a dedicated decimal literal.

By “out” do you mean “out of the question”? Can you point me to where it says so?

I found an older one from 2008 so the idea is definitely not new.


The construction of a decimal like Decimal('1.2') is not affected by the context. Decimals are always constructed exactly. However + and - are context dependent e.g. +Decimal('1.2') and -Decimal('1.2') will trigger context rounding. The question then is whether -1.2d should be treated as a single literal or as unary - applied to the literal 1.2d i.e. Decimal('-1.2') or -Decimal('1.2').
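The distinction is easy to demonstrate with the current decimal API (the precision value here is arbitrary):

```python
from decimal import Decimal, localcontext

with localcontext() as ctx:
    ctx.prec = 2
    exact = Decimal('-1.2345')     # construction is always exact
    rounded = -Decimal('1.2345')   # unary minus applies context rounding
    print(exact)                   # -1.2345
    print(rounded)                 # -1.2
```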


Ah I see. Thanks for the correction. This makes decimal literals eligible for constant folding then.

Great point. I believe treating -1.2d as a single literal, as in Decimal('-1.2'), would make the most sense in most use cases, but it’s definitely something that needs extra clarification in the documentation if the feature is implemented.

I think this is the latest update:

And “no more “tag” Instead, a single t prefix which passes a Template to your function” I understand to mean D"1.23" would now have to be D(t"1.23") in that approach. Maybe I misunderstood.

Thanks. I think you understood it correctly. The revision kills the main draw of the tag string proposal IMHO, and a syntax like D(t"1.23") can already be achieved by aliasing Decimal as D, but without an official decimal literal the syntax highlighters will not treat D("1.23") as a number.

Thanks a lot. I see that the main objection to the proposal at the time was the lack of a C implementation, which is no longer the case now.

But the proposal at the time included “make Decimal a built in”. The main objection to simply adding a decimal literal is that the core interpreter doesn’t know about the decimal type.

I see. Since we now have _decimal as a C extension module, it wouldn’t be too terribly hard to migrate it to a built-in type, would it?

I make a habit of never assuming “it won’t be too hard” unless I’ve done the research and know I’d be capable of doing the work myself if needed.

To put it another way, I don’t know, but I’d assume it would be significantly harder than you seem to expect. Certainly hard enough that the benefit would have to be a lot more than the difference between D("1.2") and 1.2d.