Literals for Decimal and Fraction

True. Will try an implementation as an exercise when I have time but you’re likely right that the benefit may not be worth the effort.

For files that use a lot of Decimal and Fraction values, I expect you could:

from decimal import Decimal as D
from fractions import Fraction as F

And then write literal-like values like D('1.2') and F(1, 3). Such values are not quite as concise as 1.2d or 1/3f, but they come pretty close.
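For example, with those aliases in place (an illustrative snippet, not from the original post; the values are made up):

price_with_tax = D('1.2') * 3      # Decimal('3.6')
half = F(1, 3) + F(1, 6)           # Fraction(1, 2)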

4 Likes

I am reviving this, if only to convince myself that this is not a path I should think about, even though I would really like to have it.

I also have come across wanting a literal for decimals (especially when considering how easy it is to accidentally do something like Decimal(0.1), leading to “wrong” Decimal constants…), though people’s point about a single-letter alias is quite valid.
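For anyone who hasn’t hit it, the footgun looks like this (a quick REPL illustration; the digits assume the usual IEEE-754 binary64 floats):

>>> from decimal import Decimal
>>> Decimal(0.1)
Decimal('0.1000000000000000055511151231257827021181583404541015625')
>>> Decimal('0.1')
Decimal('0.1')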

I’ve been generally of the opinion that people want Decimal much more than they realize, and think that upgrading Decimal to a built-in would be good in raising the notoriety of it.

I think the “novice” argument goes the other way… novices in particular could get a lot of value from being introduced to Decimal because “why is the math result weird” is a constant kind of thing.

I am imagining that “hardness” here is referring to how a patchset doing this would likely be very hard to get through the pipeline, because it would involve the trifecta of little perceived gain, high potential to break something, and some contentious decision-making required?

For me the big non-obvious thing is whether type(1.1d) is decimal.Decimal, in particular even if somebody has monkeypatched Decimal. Contexts mean that there’s very little possibility of constant folding and the like, and you’re running the Decimal constructor when getting the literal value, but in that case which one?

The straightforward argument would be that the decimal classes would be different. But then you get into the naming of the builtin. Would be nice to name it decimal, but now you have the namespace conflict with decimal itself! So are you naming it Decimal, despite having int/float/complex?

And if they’re the same, are we saying that 1.1d does a module lookup to get decimal.Decimal? Does it hold a reference? Does it use whatever Decimal is in scope (like how TypeScript compiles JSX by assuming React is in scope)? That’s weird too!

And in the builtin case, how are you changing the context and the like?
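For concreteness, here is why “which constructor” matters: the decimal module already distinguishes the plain constructor (which keeps every digit and ignores context precision) from context-bound construction and from arithmetic, which both honour the active context. A small illustration, with precision values chosen arbitrarily:

>>> from decimal import Decimal, Context, getcontext
>>> Decimal('1.2345')                         # constructor ignores context precision
Decimal('1.2345')
>>> Context(prec=3).create_decimal('1.2345')  # context-bound construction rounds
Decimal('1.23')
>>> getcontext().prec = 3
>>> Decimal('1.2345') + 0                     # arithmetic honours the current context
Decimal('1.23')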

Would have been nice if 3.0 had “upgraded” Decimal to a builtin called decimal. What’s one more messy breaking change in that bundle of changes :laughing:


For fractions, though, I think the confusion of operation ordering in things like 1/10/3f quickly makes fractional literals look like mistake-generators. I am biased, though: I have never used fractions.Fraction in any serious environment, and have a hard time imagining a professional context in which people reach for Fraction.
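To spell out the ordering concern (a hypothetical reading of 1/10/3f in which the f suffix binds only to the 3, rewritten in today’s syntax):

>>> from fractions import Fraction
>>> Fraction(1, 10) / 3                # presumably what was intended
Fraction(1, 30)
>>> type((1 / 10) / Fraction(3))       # but if f binds only to the 3, then 1/10 is already a float
<class 'float'>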

And there is a very basic counter for both in that f and d also map to … float and double literals in C-likes. Is that a huge counterargument? No, but it’s something.

What about adding to builtins Decimal and Fraction functions which simply call decimal.Decimal and fractions.Fraction (a rough sketch follows the questions below):

  1. Does the interpreter allow a builtin to call a function in a module?
  2. Does it solve the problem of having decimal and fraction literals?
  3. Is it too ugly to have PascalCase functions in builtins?
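On question 1: at the Python level, certainly, since builtins is itself just a module that names can be added to. A minimal sketch, assuming a lazy import so interpreter startup doesn’t pay for loading the decimal module (the wrapper names and signatures here are illustrative, not a real proposal):

import builtins

def Decimal(*args, **kwargs):
    # Import lazily, only when a decimal value is actually constructed.
    from decimal import Decimal as _Decimal
    return _Decimal(*args, **kwargs)

def Fraction(*args, **kwargs):
    from fractions import Fraction as _Fraction
    return _Fraction(*args, **kwargs)

builtins.Decimal = Decimal
builtins.Fraction = Fraction

# After this runs, any module can write Decimal("1.1") or Fraction(1, 3) without an import.

Note that wrapping the classes in functions breaks isinstance(x, Decimal) checks, so a real builtin would more likely expose the classes themselves (or import them at the C level, much like the long_true_divide patch shown later in this thread).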

Side question: can’t fractions.Fraction simply be moved to builtins as fraction and the module deleted? It seems that module has only that class.

What is the difference w.r.t. the proposal “have Decimal and Fraction as builtins”?

The fractions module is not the only one in the stdlib with just a single public class or function; see e.g. sched or html.parser.

It’s again the question “why not have Fraction as a builtin?” But to support this you need other arguments (see e.g. Revise true division for ints? Lets return (optionally) a Fraction!); the number of public interfaces in the fractions module alone is irrelevant here.

So “have Decimal and Fraction as builtins” means what I wrote? Simply adding new builtin functions that call decimal.Decimal and fractions.Fraction?

Is it not the question “why not have Fraction as a builtin?”, but rather “the problem of having Decimal and Fraction as builtins can’t be solved this way”? This arises from this discussion:

About your link, I don’t understand your proposal. I feel this proposal is better, but I first have to understand your idea well. I’ll continue to discuss this in the appropriate topic you linked.

But what’s the difference?

1 Like

How is that a significant improvement over

from decimal import Decimal
from fractions import Fraction

Or for people who like brevity

from decimal import Decimal as D
from fractions import Fraction as F

You said that:

From what I understood of your post, if Decimal and Fraction are not builtins, the core interpreter can’t construct them from literals. So the improvement is simply the OP’s idea of literals for Decimal and Fraction.

That’s not true. You can use stuff from the stdlib in the interpreter, with some care (this is more or less the same situation as other cases of modules with interdependencies):

Python 3.15.0a0 (heads/main-dirty:588d9fb84a, Jul 20 2025, 17:29:58) [GCC 12.2.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> 1/3
Fraction(1, 3)
>>> 
Patch for the above example:
diff --git a/Objects/longobject.c b/Objects/longobject.c
index 581db10b54..5218529916 100644
--- a/Objects/longobject.c
+++ b/Objects/longobject.c
@@ -4590,6 +4590,20 @@ long_true_divide(PyObject *v, PyObject *w)
     double dx, result;
 
     CHECK_BINOP(v, w);
+
+    PyObject *mod = PyImport_ImportModule("fractions");
+    if (mod == NULL) {
+        goto error;
+    }
+    PyObject *s = PyObject_CallMethod(mod, "Fraction", "OO", v, w);
+    if (s == NULL) {
+        Py_DECREF(mod);
+        goto error;
+    }
+
+    Py_DECREF(mod);
+    return s;
+
     a = (PyLongObject *)v;
     b = (PyLongObject *)w;
 
1 Like

Ok, so I misinterpreted Paul’s post.

(Side note: if it was not clear :grin:, I’m +1 on both Fraction and Decimal literals)

It would be better just to implement Fraction in C anyway though. If it is worth having literals then it is worth implementing in C. The decimal module has a C implementation already.

I’m really not clear what you think my post was saying. The simple fact is that adding a new literal type to the language is a big undertaking. It’s got nothing to do with whether the type is builtin (although that’s a consideration). If you think otherwise, feel free to try to implement it. Remember to include all the documentation changes, as well as changes to IDEs and other tools that parse Python code, training materials that cover Python’s literals, etc, etc, etc.

This is far more work than simply doing:

from decimal import Decimal as D

one_dec = D("1.0")

I have some sympathy for the idea of literals for decimals and fractions, but realistically it’s a major piece of work with very little practical benefit over the status quo. So I don’t see it happening, in all honesty. There are simply many more useful things to work on.

3 Likes

At some point I remember planning to write a PEP about decimal literals. When I started writing I found that all of the arguments I could come up with were really just arguments for why it would be better to have decimal floats by default rather than binary floats.

I think literals would be useful but it is a big change and a lot of work. I am not volunteering to do that work and I am not going to try hard to persuade anyone else that it is worthwhile.

The primary source of confusion about the unintuitive behaviour of floating point in simple calculations is this: we use the obvious decimal literal notation (e.g. 1.1) as both an input and output format for non-integer numbers but they evaluate invisibly to binary floats that are unable to represent most non-integer decimals exactly.

Most users can understand this because they will have done this kind of arithmetic by hand in decimal at school:

>>> 1/3
0.3333333333333333

This is much harder to understand or explain:

>>> 1.11 - 1.01 - 0.1
8.326672684688674e-17

The simple explanation here is that “floats are approximate” but that is misunderstanding the issue because this is guaranteed to be exact:

>>> 1.75 - 1.25 - 0.5
0.0

A lot of the confusion about the inexactness of floating point comes from this avoidable inexactness of using decimal literals for binary floating point. It makes everything confusing because what you see is not what you get and the numbers even display misleadingly as if exact:

>>> 1.11
1.11

(I think older Python versions used to display enough digits to show that this is not exactly 1.11.)
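For contrast, the same subtraction done in decimal arithmetic is exact (a quick illustration with the stdlib decimal module):

>>> from decimal import Decimal
>>> Decimal('1.11') - Decimal('1.01') - Decimal('0.1')
Decimal('0.00')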

The interpreter overhead of CPython when using individual float objects is so high that it could be justified to use decimal floating point as the default non-integer type. With PyPy, though, code working with floats and e.g. lists of floats is much faster, although it is a long time since I timed this. For something like NumPy it would not be justifiable to use decimal floating point most of the time.

Previous enthusiasm many years ago around the idea of decimal literals was at least partly based on the premise that decimal floats could one day supplant binary floats in common usage, so you would have e.g. 1.1f for a binary float and 1.1D for decimal, and maybe from __future__ import decimal to change what 1.1 means, and so on. I’m not sure that many would have the enthusiasm to go that far now.

Another issue with either making decimals the default or just with having literals for them is that Python’s stdlib decimal module is more complicated than you would really want for general use, with its multiprecision contexts, flags, traps, non-unique representations and so on. It would probably be problematic to be able to turn on things like traps if decimal were used ubiquitously, unless lots of code used local contexts for isolation, which is a lot of complexity when all you wanted to do was sum(nums) / len(nums) in some library code. As a default numeric type, and to make literals well defined, it would probably be better to have some simpler fixed-precision type like decimal64, but adding yet another type is potentially confusing.
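For reference, the kind of per-block isolation mentioned above looks like this today (precision values chosen arbitrarily for illustration):

>>> from decimal import Decimal, localcontext
>>> with localcontext() as ctx:
...     ctx.prec = 6
...     Decimal(1) / Decimal(7)
...
Decimal('0.142857')
>>> Decimal(1) / Decimal(7)    # back in the default 28-digit context
Decimal('0.1428571428571428571428571429')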

If you don’t use decimals for non-integer numbers by default then it is still something that you would have to opt into which requires going through the motions of learning about and thinking about the different numeric representations. At that point being able to write 1.1D rather than D('1.1') is nice but it is still a small convenience which makes it harder to justify the change as worthwhile.

The primary benefit of having decimal or fraction literals I think is because code will often end up using literals of one type to create values of another e.g.:

>>> Fraction(0.1)
Fraction(3602879701896397, 36028797018963968)
>>> Decimal(0.1)
Decimal('0.1000000000000000055511151231257827021181583404541015625')

Those two examples are from the stdlib but this propagates out to all other libraries since users want to use the literals as the basic way of writing numbers with whatever library they use. These examples are explicit but the same thing happens implicitly when you write 0.1*x for an object x of any type. You wanted to write e.g.

y = 0.1*x**2 + 0.3*x + 1

but to avoid the avoidable rounding error it would have to be e.g.

y = D('0.1')*x**2 + D('0.3')*x + 1

It might not seem like much but that is so much less nice that people just don’t want their code to look like that. This is the issue with not having literals: end users really want their numbers to “look like numbers” but the form that is provided to do this makes the numbers avoidably inexact. There is a good chance that the library that defines the x type would convert the floats, Decimals or Fractions into some other type of object, so what is needed is just a way to get the exact number 0.1 through without going via float as a lossy intermediary.

Both Decimal and Fraction literals would be useful and would be used. More thought would need to be given for how all the numeric types interact with one another if these other types became more prominent e.g.:

>>> 0.5*Decimal('0.5')
Traceback (most recent call last):
  ...
TypeError: unsupported operand type(s) for *: 'float' and 'decimal.Decimal'

If people find it too difficult to write float | int in a type annotation or just to make sure that they use float consistently then throwing Decimal and Fraction into the mix is going to confuse things. The infrastructure just isn’t there for mixed type interoperability with general numeric types apart from limited combinations like int * X works for all types X.
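To make the limited interoperability concrete (a small illustration; the promotion rules shown are those of the current stdlib):

>>> from decimal import Decimal
>>> from fractions import Fraction
>>> 2 * Decimal('0.5')        # int * Decimal is supported
Decimal('1.0')
>>> Fraction(1, 2) * 0.5      # Fraction * float silently degrades to a binary float
0.25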

Where I landed when thinking about this years ago was that decimal literals would be nice but the bigger value would be in having them as the default floating point type and that change is probably not possible. There is still value in having decimal and fraction literals but it is still a big language change and the value proposition is difficult if they seem to many like niche types rather than being/becoming more part of the standard way to use numbers in Python.

7 Likes

I don’t think it’s something that can be done with an extension. Am I wrong? ?_?

Eh? Do you mean that you core devs also have to maintain all the IDEs that support Python?

No, but you can create a patch for CPython…

Not at all. But proposals have to consider all of the work they create, not just the work that the core devs have to do.

Without a PEP? Even so, what are the chances of it being merged?

Zero. But it will give you a feel for how much work is involved. Which was my whole point:

Admittedly, implementing the change is only part of the work. So what you’ll learn will be limited. But honestly, I was just getting tired of people claiming it’s easy to add decimal and fraction literals, without any real understanding of what’s involved. So my response was a bit more snarky than it needed to be. Sorry about that.

2 Likes

This example, particularly in the case of Decimal, is the biggest footgun / pain point for using Decimal more frequently for me (well, that and a lack of confidence that libraries actually support these types instead of float). It’s weird (and not very ergonomic) that the default recommendation for expressing the value 0.1 as a Decimal requires using a string, rather than a number.[1]

I want to explore whether there might be an avenue to change that behavior so that Decimal(0.1) produces a value equal to Decimal("0.1").

My naive approach would be to carry over some or all of the logic used in float.__str__, where the float closest to 0.1 is represented as "0.1" while the next smallest and largest floats are represented differently. Could the same logic be used to identify float values that round nicely to a decimal value?
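That shortest-repr behaviour is easy to observe today (an illustration; math.nextafter needs Python 3.9+):

>>> import math
>>> str(0.1)                        # the shortest string that round-trips to this float
'0.1'
>>> str(math.nextafter(0.1, 1.0))   # the next larger float
'0.10000000000000002'
>>> str(math.nextafter(0.1, 0.0))   # the next smaller float
'0.09999999999999999'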

This proposal would be an important change to the current behavior of the Decimal constructor, which currently reads:

If value is a float, the binary floating-point value is losslessly converted to its exact decimal equivalent.

Rounding as I’ve proposed wouldn’t affect the ability to losslessly round-trip from float to Decimal and back, since float(Decimal("0.1")) == 0.1. It would, however, mean that Decimal objects are no longer guaranteed to exactly equal their float arguments (since Decimal("0.1") != 0.1). I think we’d also need to change the behavior of Decimal.__eq__ so that Decimal(f) == f continues to be true for all finite floats. We’d probably also want some escape hatch (a new class method?) that would implement the current behavior.

This is probably too big of a backwards incompatible change (I hope I’m wrong!). Maybe we just provide a recipe like the following as a recommendation for users who wish to express lots of “natural” Decimal values in code?

from decimal import Decimal

def D(value: float) -> Decimal:
    return Decimal(str(value))

assert D(0.1) == Decimal("0.1")  # True

  1. I am less concerned about Fraction, because Fraction(1, 10) seems intuitive enough to express 1/10 ↩︎

It’s not necessarily about round-tripping though. Sure, in cases where you start with a decimal number, turn it into a float, and then into a Decimal, you win; but at the cost that float->Decimal is no longer lossless. So I do not think that this should be the behaviour of the default Decimal constructor.

However, if you were to spec up an alternative, it could easily then be imported under a convenient name. And the recipe you’ve provided would be an entirely viable option for that. Sometimes, the solution IS just a recipe.